US
Roe v Wade overturned – Tech companies attempt to neutralise the effect for employees
… As most readers will know, the US Supreme Court has overturned Roe v Wade, meaning that in the US there is no longer a constitutional right for a woman to obtain an abortion. States can therefore pass laws to ban abortion, should they wish. For example, at the time of writing there is a temporary injunction against the abortion ban in Texas, whereas in Tennessee abortion is banned after six weeks of pregnancy (a stage at which a pregnancy can easily go unnoticed). Tech companies were quick to react: Alphabet (Google) and Apple (alongside other non-tech companies) said they will pay for employees to travel to receive reproductive care if they live in states where abortion is banned. Presumably there will be confidential channels through which employees can apply. These companies are potentially risking liability, because in some states anyone who assists a woman in obtaining an abortion can be liable. As I understand it, Meta's and Microsoft's offers of support were premised on the provision of assistance being lawful.
Artificial Intelligence
The UK government says copyright law will be amended to promote progress in AI
…In its press release, the government said that anyone with lawful access to material will be able to data mine it without the copyright owner's permission, even if the material is protected by copyright. Data mining is an important technique used, for example, in training AI, in which software (or a bot) collects and analyses material (eg on the internet) for patterns, trends and other useful information. The aim is to make the UK a location of choice for data mining and AI development. The government says that it seeks to use Brexit as an opportunity to make its own laws that favour technological progress.
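For illustration only, here is a minimal Python sketch of the kind of data mining described above – collecting a page (to which the user is assumed to have lawful access; the URL is a placeholder) and analysing it for a simple pattern, namely the most frequent terms:

    import re
    from collections import Counter
    from urllib.request import urlopen

    # Placeholder URL - assumes lawful access to the material
    url = "https://example.com/articles"

    # Collect: download the raw page content
    html = urlopen(url).read().decode("utf-8", errors="ignore")

    # Analyse: strip the markup and count word frequencies
    text = re.sub(r"<[^>]+>", " ", html)
    words = re.findall(r"[a-z]+", text.lower())
    print(Counter(words).most_common(10))  # the ten most frequent terms

Real mining pipelines are of course far larger, but the collect-then-analyse shape is the same.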
What’s the EU position?
It is interesting to bear in mind that there are data mining exceptions under the EU's Copyright in the Digital Single Market Directive (applicable across the EU):
- Research organisations / cultural heritage organisations are allowed to data mine for a scientific purpose (even if they are carrying out research with a business under certain partnerships). Importantly, it is not possible to restrict the ability of research organisations / cultural heritage organisations to data mine, provided they have lawful access to the material.
- In cases of data mining by a non-research organisation / non-cultural heritage organisation, or for a non-scientific purpose, it is possible to data mine without permission if there is no express reservation against data mining (for example, machine-readable coding on the webpage or a licensing agreement term – see the sketch below).
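As a hedged illustration of what a machine-readable reservation might look like in practice, a site could disallow bots in its robots.txt file, and a data miner could check for this before crawling – a sketch using Python's standard library (the URL and bot name are placeholders, and whether robots.txt amounts to an express reservation under the Directive has not been tested):

    from urllib import robotparser

    # Placeholder site - a real miner would check every site it crawls
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    # If the site reserves its rights against our bot, skip it
    if rp.can_fetch("ExampleMinerBot", "https://example.com/articles"):
        print("No reservation found - mining may proceed")
    else:
        print("Express reservation in place - do not mine")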
For data miners, the UK position is better than the EU position, because in the UK it will not be possible to prohibit data mining at all, provided that the user has lawful access.
What about protection afforded to IP created by AI?
No changes are proposed to the UK's patent inventorship criteria (the question of whether an AI can be an inventor) or to copyright in computer-generated works. Under UK copyright law, a literary, dramatic, musical or artistic work which is computer-generated can attract copyright even if there is no human author – this has been argued to be at odds with the EU position, which can be read as requiring human creation for copyright to arise (eg Case C-145/10 (Painer), Case C-683/17 (Cofemel)).
Microsoft revises its Responsible AI standards, restricts and retires certain capabilities
…Having regard to the fact that certain AI can be used inappropriately, Microsoft has decided to revise its Responsible AI Standards and withdraw certain AI capabilities from open-ended use.
Revised Responsible AI Standards
Microsoft’s revised Responsible AI Standards have the following overarching requirements:
- Accountability
  - Impact Assessments
  - Oversight of significant adverse impacts, including whether the system can be deployed for sensitive uses
  - Fit for Purpose: document in the Impact Assessment how the system provides valid solutions for the problems it is designed to solve
  - Data Governance and Management: what data will be collected and processed (labelling, cleaning, enrichment and aggregation), and how will it be used? In which geographic areas?
  - Human Oversight and Control: who will carry out troubleshooting, managing, operating, overseeing and controlling of the system during and after deployment? How will the system's behaviour be interpreted, and how will it be controlled/overridden?
- Transparency
  - System intelligibility for decision making: how will the relevant system behaviour be interpreted in a way that supports informed decision making?
  - Communication to stakeholders: explain the capabilities and limitations of the AI systems to support stakeholders in making informed choices about those systems
  - Disclosure of AI interaction: inform people that they are interacting with an AI system, or are using a system that generates or manipulates image, audio or video content that could falsely appear to be authentic
- Fairness
  - Quality of Service: make sure the system provides a similar quality of service for identified demographic groups, including marginalized groups. [This will mean AI is not deployable for certain uses if there is insufficient data about a certain category of people]
  - Allocation of resources and opportunities: minimize disparities in outcomes for identified demographic groups, including marginalized groups (especially when used in finance, education, employment, healthcare, housing, insurance or social welfare)
  - Minimization of stereotyping, demeaning and erasing outputs: applies where AI system outputs include descriptions, depictions or other representations of people, cultures or society
- Reliability & Safety
  - Failures and remediations: minimize the time to remediation of predictable or known failures (define predictable failures, including false positive and false negative results for the system as a whole, and how they would impact stakeholders for each intended use). How will failures be remedied, how long will it take, and will there be oversight to ensure failures can be avoided?
  - Ongoing monitoring, feedback and evaluation: use the outcomes to improve the system
- Privacy & Security
- Inclusiveness
Restrictions on some capabilities
Microsoft is advocating for laws to regulate the use of facial recognition but in the meantime has decided to limit access to Azure Face API, Computer Vision and Video Indexer to those that apply for it ["Good job!", the Ada Lovelace Institute would say – see In the Spotlight below]. Those that propose to use Microsoft's capabilities have to demonstrate that their use will be in accordance with the above Responsible AI Standards.
Separately, Microsoft said it will retire facial analysis capabilities that purport to infer emotional states and identity attributes such as gender, age, smile, facial hair, hair and makeup, owing to the lack of consensus on a definition of “emotions” and the inability to generalize the linkage between facial expression and emotional state across use cases, regions and demographics. Microsoft also identified stereotyping, discrimination and unfair denial of services as risks that had to be avoided. However, Microsoft is allowing some limited uses, in particular to support technology for people with disabilities, such as Seeing AI.
Amazon’s AI assistant Alexa can now speak to you in the voice and style of your dead relatives
…the demo was of a child asking Alexa to read a story in the voice of his dead grandmother. There is a question of whether such a tool would impede the grieving process. There is also the question of abuse: users could take a person's casual voicemail (or, more likely, a celebrity's voice from a videoclip) and convert it into Alexa's voice without consent. Then there are the scams, deepfakes and spread of disinformation that this type of technology could facilitate. For example, anyone could use the capability to call up parents in their child's voice seeking a transfer of money, or make a politician appear to say something he or she never said. During a consultation on AI and IP in the UK, some voiced the need to expand the scope of performers' rights under copyright law to address these issues. The UK government said that the proposal was taken seriously but will be put on hold for now.
BigTech / Data / Platforms
Google Shopping Case Mark 2? Danish Jobindex complains to the EU Commission that Google is self-preferencing Google for Jobs in breach of competition law
…Stating that the issues are similar to the Google Shopping case, Jobindex, which advertises vacancies, made the following complaints:
- Google self-preferences its own service Google for Jobs over other similar services. When a search is made, Google’s job search box appears after the sponsored links, but above the organic search results.
- This is despite the fact that (in Jobindex's opinion) the Google for Jobs service is inferior to that provided by Jobindex. Google's search results should be ranked according to their objective relevance, but Google favours its own tool over others. This breaches the principle of search neutrality, Jobindex argued.
- Some of the jobs listed in Google for Jobs originate from Jobindex, but there is no reference to Jobindex. Recruiters pay a premium to be listed on reputable sites and should not find themselves listed in Google for Jobs, whose listings are not always comprised of sound employers.
Google explains that it partners with job providers to help job seekers find the right employer.
In the Google Shopping case, the EU Commission found in 2017 that Google violated antitrust provisions by systematically giving prominence to its own comparison shopping service over third parties' comparison shopping services. The algorithms which rank the relevance of search results were not applied to Google's own service. Last November, the General Court of the European Union confirmed the decision of the EU Commission.
UK to abolish cookie consent pop-ups for each and every website in the long run
…so it said in its response to the consultation on the reform of the UK data protection regime. The consultation revealed that some entities (I assume a lot of them are advertisers) were unable to collect useful information, whilst users found cookie consent pop-ups annoying. Currently, cookies for limited purposes (where essential to provide the service, or where needed to transmit communications) do not require the user's consent. The UK suggests that cookies that enable audience measurement are non-intrusive and so ought to be exempted from cookie consent, while other types of cookies which collect personal data (used particularly for ad-targeting) are more intrusive and so ought to remain subject to cookie consent.
The government concluded that in the future, it intends to move to an opt-out model of consent for cookies placed by websites. In practice, this would mean cookies could be set without seeking consent, but the website must give the web user clear information about how to opt out.
In the government response there was no mention of Google's Privacy Sandbox, an alternative technology which enables advertisers to carry out measurement and tracking while at the same time protecting user privacy. It essentially works by aggregating data about conversions (clicks, purchases) and attribution (which ad, placed on which website, led to them). The EU Commission and the UK competition authority are examining the effects of the Privacy Sandbox on competition.
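For illustration only, a toy Python sketch of aggregate measurement in the spirit described above (made-up data, and not Google's actual API): individual events are collapsed into per-ad, per-site counts, and small buckets are suppressed so that no single user can be picked out.

    from collections import Counter

    # Hypothetical event log: (ad_id, publisher_site, event)
    events = [
        ("ad-42", "news.example", "click"),
        ("ad-42", "news.example", "click"),
        ("ad-42", "news.example", "purchase"),
        ("ad-42", "blog.example", "click"),
        ("ad-7",  "news.example", "click"),
    ]

    MIN_BUCKET = 2  # suppress buckets small enough to identify someone

    totals = Counter(events)
    report = {k: v for k, v in totals.items() if v >= MIN_BUCKET}
    print(report)  # only aggregate counts are reported to advertisers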
There is a concern about whether any reform will result in the UK losing its adequacy status with the EU, which businesses rely on for the free flow of personal data to and from the EU.
How do cookies work?
The main purpose of a cookie is to identify users, and possibly to prepare customized web pages or save information – so that when you visit the website for a second time, it knows your preferences. Third-party cookies are those which send information not to the site you are visiting but to others, eg advertisers on that site. See: https://policies.google.com/technologies/cookies?hl=en-US
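As a minimal sketch of the mechanics, using Python's standard library (the cookie name and value are made up): the server sends a Set-Cookie header on the first visit, and the browser returns it on later visits so the site can recognise the user.

    from http.cookies import SimpleCookie

    # Server side: issue an identifier on the first visit
    cookie = SimpleCookie()
    cookie["visitor_id"] = "abc123"          # hypothetical identifier
    cookie["visitor_id"]["path"] = "/"
    cookie["visitor_id"]["max-age"] = 86400  # persist for one day
    print(cookie.output())  # -> Set-Cookie: visitor_id=abc123; Max-Age=86400; Path=/

    # Browser side: on the next visit the header comes back...
    returned = SimpleCookie("visitor_id=abc123")
    print(returned["visitor_id"].value)      # -> abc123
    # ...letting the site look up this visitor's saved preferences

A third-party cookie works the same way, except that it is set by (and returned to) a domain other than the site being visited, such as an advertiser.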
Is it OK for a government official to block someone from commenting on his Facebook page? – It depends, said a US court (the Sixth Circuit)
…James Freed had a Facebook page. He subsequently became the city manager for Port Huron, Michigan. His Facebook page became too popular for a personal profile, so he did the following:
- converted his profile to a “page”, which has unlimited “followers” instead of friends
- chose “public figure” as the page category
- updated the Facebook page to reflect his new title
- in the “About” section, most recently described himself as “Daddy to Lucy, Husband to Jessie and City Manager, Chief Administrative Officer for the citizens of Port Huron, MI”
- listed the Port Huron website, the City's general email and the City Hall address as contact details
- posted a mixture of private and public matters
Lindke didn’t approve of how Freed was handling the pandemic and started responding with criticism. Freed blocked Lindke, who then sued, claiming the blocking violated his First Amendment rights.
The US court sided with Freed, saying that the Facebook page was operated in his personal capacity:
- Freed not duty-bound to have a Facebook page
- Facebook page did not belong to the office of city manager – It wouldn’t make sense for Freed’s successor to take over that page
- Government doesn’t employ anyone to operate the page
- No official account directed users to the Facebook page
- The office had no control over the Facebook page
These facts distinguished the case from the Second Circuit's decision in Knight First Amendment Institute at Columbia University v. Trump, in which the plaintiffs succeeded in showing that Trump had violated the First Amendment by blocking users. That court had held: “While he is certainly not required to listen, once he opens up the interactive features of his account to the public at large he is not entitled to censor selected users because they express views with which he disagrees”.
Metaverse
First ever law firm in the Metaverse and issues with it
…I was alerted by this article to the fact that a personal injury firm became the first law firm to set up in the metaverse Decentraland, back in December 2021. Given that those donning VR headsets have been known to injure themselves – by failing to take account of an obstacle or a set of stairs in the real world – a personal injury firm might come across as quite apt, the article says.
Nice though the idea may be, the author talks about some of the regulatory issues that need to be considered:
- How secure will the correspondence be between the lawyer and the client?
- How do you carry out identity checks?
- Where will that data be stored?
- Can that data be deleted (right to be forgotten provided for in data protection laws in some jurisdictions, such as Europe)?
These are issues that need to be accounted for, but soon the technology should catch up to enable law firms to conduct their practice and abide by various regulatory requirements, the article concludes.
In the Spotlight
Democrats ask US FTC to investigate Apple and Google over transforming online advertising into an “intense system of surveillance” that incentivises unrestrained collection of data from their mobile platforms – in anticipation of the overturning of Roe v Wade
…The letter was written in anticipation of Roe v Wade being overturned.
It all concerns the incorporation of unique tracking identifiers into iOS and Android for ad-targeting purposes. The unique tracking identifiers enable Apple and Google to understand what users do on their mobile phones (eg which websites do they browse? What sort of questions do they search on Google or Safari (which uses Google's search engine)? What do they purchase from which website? What in-app purchases do they make?)
- Apple: until recently, Apple allowed users to opt out, but only through a complicated procedure. Apple now makes it easy for users to opt out. However, because this means that third parties cannot track the user, leaving only Apple able to exploit the data, it presents an antitrust issue (which the German competition authority is presently probing). To be precise, as I understand it, users can also opt out of Apple collecting personal data, but that option is presented to the user in a different way, as addressed in Facebook's comment letter. Facebook says this is unfair.
- Google: until recently, users were not able to opt out, and Google still enables tracking by default. Whilst it is now possible for users to opt out, it is a complicated process.
What do unique tracking identifiers enable?
The unique tracking identifiers are not anonymous and can be used to identify the relevant individual – for example, it is easy to identify which residential address an identifier is associated with by looking at the identifier's night-time location data. In fact, some data brokers hold datasets of unique tracking identifiers linked to the personal details of the individuals they represent (name, email address, postal address, telephone numbers etc). The letter says “Apple and Google enabled governments and private actors to exploit advertising tracking systems for their own surveillance and exposed hundreds of millions of Americans to serious privacy harms”.
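To illustrate how straightforward the night-time inference described above would be (a toy Python sketch with made-up data, not any broker's actual method): group one identifier's location pings by hour, keep the night-time ones, and take the most frequent location as the likely home.

    from collections import Counter
    from datetime import datetime

    # Made-up location pings for one tracking identifier: (timestamp, lat, lon)
    pings = [
        (datetime(2022, 6, 1, 2, 14), 42.97, -82.42),   # night
        (datetime(2022, 6, 1, 13, 5), 42.99, -82.45),   # daytime
        (datetime(2022, 6, 2, 3, 40), 42.97, -82.42),   # night
        (datetime(2022, 6, 2, 23, 55), 42.97, -82.42),  # night
    ]

    # Keep pings between 10pm and 6am, rounded to roughly 1km grid cells
    night = [(round(lat, 2), round(lon, 2))
             for t, lat, lon in pings if t.hour >= 22 or t.hour < 6]

    # The modal night-time cell is very likely the residential address
    home, count = Counter(night).most_common(1)[0]
    print(f"Likely home cell: {home} (seen on {count} pings)")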
What’s Roe v Wade got to do with it?
The letter notes that “Data brokers are already selling, licensing, and sharing the location information of people that visit abortion providers to anyone with a credit card. Prosecutors in states where abortion becomes illegal will soon be able to obtain warrants for location information about anyone who has visited an abortion provider. Private actors will also be incentivized by state bounty laws to hunt down women who have obtained or are seeking an abortion by accessing location information through shady data brokers”.
Are the Democrats over-reacting?
It is true that tech companies know users very well, because they know so much about our private lives: what we do, where we visit, what we buy, what we like, what predilections we have. A high-ranking Catholic priest was in the past outed as gay when a Catholic news outlet purchased commercially available location data and worked out that the priest's phone had been used at gay bars and private residences whilst running the gay dating app Grindr. And when the department store Target used its own customer data to send targeted ads, it disclosed to a man that his teenage daughter was pregnant without his knowledge; the man had gone to complain to Target about it bombarding his daughter with ads for baby items, apparently encouraging her to fall pregnant despite the fact she was still at high school.
With incidents like these, it is not possible to say that the Democrats' point is far-fetched. In the US, the government has the power to compel companies to turn over data under their control – which is why the top court in the EU (the CJEU) ruled that sending personal data to the US contravened the GDPR. The FTC has already raised the possibility that emerging technology such as AI could incentivise surveillance. As the chance that the US government could one day adopt an autocratic regime is not nil, remote though it may seem, one can't help thinking that it would be prudent to consider these issues now to future-proof citizens' rights.
Around the world
An independent legal review in the UK, commissioned by the Ada Lovelace Institute, concluded that a technologically neutral framework is needed, so that emerging technologies can be used in a way that is “responsible, trustworthy and proportionate”. The review advised that the use of live facial recognition, which compares biometric data against a database of records, ought to be banned immediately until biometric technologies are properly regulated.
What can users do?
There are apps which offer end-to-end encryption for reproductive health services such as menstrual cycle tracking, and there are VPN apps you can download to safeguard your location data. That way, users' information will be safe: were there to be a cybersecurity breach or attack, or were a state to compel businesses that may have information on women who might have illegally obtained an abortion (because their menstrual cycle ceases and then resumes before term) to turn over data, they would be unable to do so, because no such data would be within their control.
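A minimal sketch of why end-to-end encryption puts such data beyond a provider's control (a toy illustration using the Python “cryptography” package, not any particular app's implementation): the key is generated and kept on the user's device, so the provider only ever stores ciphertext it cannot decrypt.

    from cryptography.fernet import Fernet

    # The key is generated and kept on the user's device only
    key = Fernet.generate_key()
    device = Fernet(key)

    # The entry is encrypted before it ever leaves the phone
    entry = b"2022-06-28: cycle day 1"
    ciphertext = device.encrypt(entry)

    # The provider stores only the ciphertext; without the key, neither it
    # nor anyone who compels disclosure can recover the plaintext
    print(ciphertext)

    # Only the user's device, holding the key, can decrypt
    print(device.decrypt(ciphertext))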