Tech Pick of the Week
EU’s Top Court rules that Article 17 of the Directive on Copyright in the Digital Single Market, which makes platforms liable for infringing material (unless best efforts are made), is not unlawful
…A representative of Google at a conference once said, “let’s face it, Article 17 was targeted at YouTube”. This provision says, in a nutshell, that online-sharing service providers must make “best efforts” to ensure that copyright-infringing material on their platforms is minimised. The bigger the platform, the more effort (ie: resources) must be expended (per the principle of proportionality), though there is no general monitoring obligation.
Poland objected, saying the law conflicted with the freedom of expression and information guaranteed under Article 11 of the Charter of Fundamental Rights of the EU. The Court of Justice of the EU held that there has to be a fair balance between that freedom and the right to intellectual property, protected by Article 17(2) of the Charter. In particular, any measures put in place by YouTube (or any other online-sharing service provider) must not result in the unavailability of lawful material.
Note 1: As an example, YouTube uses an automatic copyright filter called Content ID, which makes it easier for YouTube to automatically catch potentially copyright-infringing material as it is uploaded. No doubt YouTube (Google) will say that the provision of this automatic copyright filtering demonstrates that it is making “best efforts”. Critics say that YouTube’s use of Content ID is unbalanced, because it also filters out uses of copyright-protected material which would qualify as “fair use” under US law, meaning they are legal. There are exceptions under EU law as well (see Article 5 of the Infosoc Directive), although they are much more circumscribed. Having said this, there is a risk that YouTube’s automatic copyright filter will prevent lawful use of copyright-protected material, which would fall within the sort of complaint raised by Poland.
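A minimal sketch of how a filter in the spirit of Content ID can work in principle. This is an illustrative toy, not YouTube’s actual system: the chunk-hashing scheme, class names and threshold logic below are all assumptions for demonstration. Registered content is fingerprinted, and uploads are scored by how much of their fingerprint overlaps with the registry.

```python
import hashlib

CHUNK = 16  # bytes per chunk in this toy example

def fingerprints(data: bytes) -> set[str]:
    """Hash fixed-size chunks of the raw content (toy fingerprinting)."""
    return {
        hashlib.sha256(data[i:i + CHUNK]).hexdigest()
        for i in range(0, len(data) - CHUNK + 1, CHUNK)
    }

class CopyrightFilter:
    """Hypothetical filter: rights holders register content; uploads are scored."""

    def __init__(self):
        self.registered: set[str] = set()

    def register(self, content: bytes) -> None:
        self.registered |= fingerprints(content)

    def match_ratio(self, upload: bytes) -> float:
        """Fraction of the upload's fingerprints that match registered content."""
        fps = fingerprints(upload)
        if not fps:
            return 0.0
        return len(fps & self.registered) / len(fps)

f = CopyrightFilter()
song = b"some protected song data, some protected song data!!!"
f.register(song)
print(f.match_ratio(song))  # a verbatim copy scores 1.0
```

The over-blocking concern raised by critics follows directly from this design: a lawful quotation or fair-use excerpt containing registered chunks would also score a match, so a purely mechanical filter cannot tell lawful reuse from infringement.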
Note 2: Article 17 appears to work well with the proposed Digital Services Act, which stipulates that online intermediaries must enable users to flag unlawful material for the platform to deal with.
BigTech
Twitter reassures advertisers that toxicity on the platform will be controlled
…In preparation for the takeover by free speech absolutist Elon Musk, many fear that toxicity (ie: trolling, brigading (co-ordinated campaigns, sometimes using bots) and doxing (maliciously publishing private information)) may increase in the name of free speech. Twitter is particularly vulnerable to toxic behaviours because anyone can respond to anyone (though you can block unwanted followers/responders). On the other hand, Musk is expected to clamp down on bots, which are said to contribute to toxicity on Twitter (this would decrease the number of daily active users, but that presumably will matter less once he takes the company private).
What Twitter wants to avoid is a reprise of the Stop Hate for Profit campaign mounted against Facebook by big brands such as Unilever, Ford and Coca-Cola, which threatened to pull ad spending owing to Facebook’s alleged failure to tackle hate speech. Compared to Facebook, Twitter is particularly vulnerable because its advertising revenues come predominantly from large corporations, whereas Facebook has many small and medium-sized companies on its books (Twitter is more about raising profile; Facebook is more about highly targeted advertising, increasing the proportion of people who click on adverts and convert into a purchase).
Separately, some EV companies have other concerns, as their marketing strategies may leak to Tesla. Henrik Fisker, the CEO of EV maker Fisker, has deleted his Twitter account and asked all to follow him on Instagram instead. Others, such as General Motors, are sitting on the fence for now.
Biden administration sets up Counter-Disinformation Board
… With a focus on preventing spread of disinformation from Russia among other things. One would need to ensure – one assumes – that there are proper checks and balances to ensure that information is not being filtered out depending on the politics underlying the messages, and that the board is not being used to benefit whichever political party is in power.
Is this in part a response to Musk’s Twitter takeover? If so, this is a rather speedy response.
Apple charged with an antitrust violation in the EU Commission’s preliminary findings, over blocking third parties from providing mobile wallet systems
..The conduct that is subject to the charge is the use of NFC (Near Field Communication) to enable iPhone users to pay by tapping on the merchant’s device. Apple’s iPhones have a chip in them to enable NFC, but Apple has locked this chip to Apple’s wallet app. You can of course have your credit cards in the wallet, which gives banks etc access to the embedded NFC technology, but because the chip is locked into Apple’s wallet system, Apple is able to take a commission on the transactions. The EU Commission is saying that Apple should enable third parties to provide non-Apple wallets. Apple says that the ecosystem is structured in the interests of security and privacy. In any event, suppose Apple were made to allow other mobile wallets – users are going to find it easier to use Apple’s mobile wallet anyway, because their ID and other vital information is locked into the Apple ecosystem.
Apple is charged with two other antitrust breaches by the EU Commission:
Spotify has challenged the 30% commission fee + prohibition on using its own payment system: This time last year, the Commission sent a Statement of Objections to Apple, taking a preliminary view that Apple has abused its dominant position. Spotify’s side of the story is posted on a website called Time to Play Fair.
Similar to Spotify, but relates to eBooks.
If found to be infringing, Apple could face up to 10% of global revenues… how much would that be? Read on…
The next post is along the same theme, about an instance where Apple kicked a certain app out of the App Store…
Apple cleared by a Californian federal judge over excluding an app from the App Store
…Currency exchange app developer Konverti sued Apple for excluding it from the App Store, claiming the exclusion breached competition law. The Konverti app was designed to let people wanting to exchange currency (eg. US dollars to pounds sterling) meet up in person. Apple had excluded Konverti because it saw scope for abuse and danger, with encounters possibly leading to money laundering, fraud, counterfeit currency trading and other financial crimes. On this occasion the claim was dismissed because, among other things, it lacked specific pleading as to why Apple’s conduct harmed competition or why Apple’s decisions were arbitrary.
Californian court dismisses claim against Facebook for displaying scammer’s ad
…The judge so held as he found that Facebook did not do more than just publish the ad. The Plaintiff had sued Facebook on the grounds of negligence, breach of contract, breach of covenant of good faith and fair dealing, and California’s unfair competition law.
Facebook did not contribute to the furtherance of the scam, the judge decided. When users complained about one ad, Facebook removed it, having decided that the ad violated its policy. However, when the scammer re-posted the same ad under a slightly modified name, it was not caught. The judge gave leave to amend the pleadings in case the plaintiff can plead facts demonstrating that Facebook did something more than merely publish an ad (eg. promote the ad) that happened to have deceptive intent.
Now this case is in the US. But in the EU, the proposed Digital Services Act is likely to come into play. Suppose the ad was unlawful. Facebook may have to show that it made sufficient efforts to minimise unlawful content from being displayed again.
Apple posts record-breaking revenue of nearly $100 billion in its most recent quarterly earnings report
It is made up of:
50% iPhones
10% Macs
20% Services – Apps, iCloud+, Music, Apple TV+, Apple Arcade (gaming), Apple Fitness+, Apple News+
Remainder: iPad, Wearables, Home and Accessories.
The Services business is growing fast, and that is clearly the direction the business is moving in. One can easily understand this when you think about the number of users using Apple Pay (see above), and the commission from in-app purchases. No wonder Apple has launched an iPhone at a lower price point. When one thinks about that, it is quite surprising that the proportion of income from services isn’t greater.
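As a back-of-the-envelope check on the split above (the percentages and the ~$100bn figure are the article’s round numbers, and the ×4 annualisation is a crude assumption), the segments, and the “10% of global revenues” fine ceiling mentioned earlier, work out roughly as follows:

```python
# Rough arithmetic on the article's round figures; all numbers approximate.
quarterly_total_bn = 100  # ~$100bn quarterly revenue

split = {
    "iPhone": 0.50,
    "Services": 0.20,
    "Mac": 0.10,
    "iPad, Wearables, Home and Accessories": 0.20,  # remainder
}
assert abs(sum(split.values()) - 1.0) < 1e-9  # shares must cover the total

for segment, share in split.items():
    print(f"{segment}: ~${share * quarterly_total_bn:.0f}bn per quarter")

# Crude x4 annualisation to size the EU's maximum antitrust fine
# (10% of global revenues) referred to above:
annual_total_bn = quarterly_total_bn * 4
max_fine_bn = 0.10 * annual_total_bn
print(f"10% of ~${annual_total_bn}bn annual revenue: ~${max_fine_bn:.0f}bn")
```

On these rough numbers, Services alone is in the region of $20bn a quarter, and the theoretical fine ceiling is of the order of $40bn, which is why the Commission’s charges matter to Apple.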
Japanese Government considers reining in BigTechs
…Japan is considering the following:
One issue is that users can’t easily sue US-based companies. Big IT companies have been asked to register their HQ entity in Japan, to enable Japanese users who have suffered harm on a platform to sue in their home jurisdiction.
Prohibition against pre-installed apps on iOS and Android phones.
Mandated provision of multiple app stores.
The latter two potential changes will concern Apple and Google in the main. Both of these are issues raised in the US and EU.
Reseller accuses Cisco and its preferred distributor in Texas of forcing SMEs to buy new network equipment in breach of competition law
…Network equipment re-seller Dexon says Cisco is using FUD (Fear, Uncertainty and Doubt), coercing SMEs into buying expensive new equipment and refusing to service their existing equipment when they want a software update. Cisco deters customers from buying (cheaper) equipment from re-sellers by claiming risks such as malware or spyware, the suit says.
Amazon and its C-suite executives sued for breaching biometric privacy laws in Illinois
…Under the Illinois Biometric Information Privacy Act, informed consent must be obtained and certain information given before the collection, use and storage of biometric information. The complaint says Amazon’s governance and internal procedures for complying with the regulation are inadequate. It concerns the collection of facial data uploaded to Amazon’s photo storage service, and features that allow shoppers to virtually try on make-up and clothing. The interesting point is that Amazon’s directors are also sued, which will help pressurise Amazon to settle.
EV
Beijing grants robotaxi licences to Baidu and Pony.Ai
…Baidu (roughly, China’s answer to Google, which has partnered with state-owned BAIC Group for the autonomous driving venture Apollo) and Pony.Ai (backed by Toyota) can now operate robotaxis around Beijing, meaning members of the public can now ride in them. A person must be in the vehicle, but not necessarily behind the wheel. Costings, performance and take-up by the public will be interesting to watch.
Metaverse
Bored Ape backers to launch the Otherside metaverse game – punters rush to buy land in the metaverse
…This time it is real. Last week, Bored Ape NFTs were stolen following a hack of the official Bored Ape Instagram account, which announced a (fake) opportunity to claim land on the upcoming Bored Ape metaverse, the Otherside.
The launch of the Otherside could spark more activity in the metaverse generally. The main metaverse games (meaning games on a blockchain), such as Decentraland and The Sandbox, reportedly have only a few regular users. This game, which features the most high-profile NFT collection, could mark the turning point, and the key to success must rest on how entertaining it turns out to be. The virtual land sale was launched with the game by Yuga Labs (backed by venture capital firm Andreessen Horowitz) in partnership with Animoca Brands, betting big on purchasers wanting to get on the speculative bandwagon. Such moves were aplenty; the Ethereum blockchain crashed as a result of the (virtual) land-grab frenzy which ensued. Bored Apes, as well as other collectors’ NFT series such as Cool Cats and World of Women, can appear in the Otherside. ApeCoins (now tradable on crypto exchanges) will be used on the platform.
Note that The Sandbox is owned by Animoca, which also has a stake in Decentraland. You can foresee, then, that these worlds might eventually become more integrated. Meta is probably keeping a watchful eye.
Telecoms
Mobile Virtual Network Operator Mint Mobile may be on the hook for customer’s loss from crypto hack
…Mobile Virtual Network Operator (an operator that provides telecom services by leasing wireless capacity from the main carriers, usually more cheaply than the main carriers) Mint Mobile was sued for negligence by a customer whose cryptocurrency, worth nearly $500k, was stolen following a data breach.
The customer sued Mint Mobile on the basis that its data breach occurred just before the customer’s SIM was hijacked (sometimes called SIM port-out or SIM swapping: transferring your mobile phone number, which third parties can do using the original owner’s personal data). The customer alleged that the crypto hack was enabled by the data breach, which disclosed the personal information needed to carry it out.
Mint Mobile moved to strike out the claim, but a Californian federal judge ruled that the plaintiff had pled the facts sufficiently, that a duty of care was established and the damage foreseeable, and that the case should proceed, given that discovery of documents had not yet taken place. Mint Mobile’s argument that it could not possibly be responsible for an independent third party’s illegal acts was rejected.
EU Telecoms companies say BigTechs should contribute to infrastructure spend
…The rationale is that streaming and social media companies account for about 55% of all traffic on mobile and broadband networks. This costs the EU telecoms companies (Vodafone, Orange, Deutsche Telekom, Telefónica etc) around €15-28 billion each year, it has been reported. It is an acute issue for EU telecoms companies at present, which need cash to prepare for 5G and full-fibre rollout. If the infrastructure cost is too much for the EU telecoms businesses, the 5G rollout may well become that much slower – indeed, over in the US, telecoms companies are instead updating the old copper network because a full upgrade is just too costly.
Telecoms companies point to the case of SK Telecom of South Korea, which successfully sued Netflix after being compelled to upgrade its network owing to the viewing traffic from the hugely popular Squid Game.
In the Spotlight
UK’s Digital Regulation Cooperation Forum (DRCF) publishes calls for views on the use of algorithms – revealing the kinds of regulatory requirements which may be being considered for the future
…The DRCF is an initiative under which the following four digital watchdogs work together:
- Competition and Markets Authority (CMA)
- Financial Conduct Authority (FCA)
- Information Commissioner’s Office (ICO)
- The Office of Communications (Ofcom)
Call for Views 1: benefits and risks of how sites and apps use algorithms
Why the need for a Call for Views?: Modern Machine Learning and Artificial Intelligence approaches could cause harm (give rise to misrepresentation, distortion of competition, amplification of biases leading to discrimination/inequalities, and harm to people’s right to privacy) if not used with care. Algorithmic processing is opaque and lacks accountability.
What do the Regulators want?: An increase in understanding of the nature and severity of the risks, so that they can help businesses use algorithms in a responsible manner. The Regulators can then provide meaningful guidance, mandate new rules and regulate businesses not acting responsibly, whilst at the same time promoting effective competition, resilient infrastructure and systems which protect individuals from harm.
There are 6 Common Areas of Focus: Stakeholders are invited to comment on the DRCF’s areas of focus, to alert the DRCF to other issues that ought to be considered, and to offer ideas on how the DRCF might assist individuals and consumers to navigate the algorithmic processing ecosystem in a way that serves their interests.
- Transparency of algorithmic processing [most important]: Needed to ensure users know when their rights are being infringed / provide informed consent / ensure fair treatment
For example:
- Providing information to users as to when data is being collected, how it is processed and for what purposes. Are the users to whom the data belongs giving informed consent?
- Providing information on the algorithmic processing carried out. For example, what training data was used, and is a human in the loop? What protocols govern the processing?
- Providing explanation of any decisions made
- Vendors of algorithmic systems (includes sellers of “off the shelf” algorithms) should also inform customers of the limitations and risks associated with their products
- Making clear who is responsible for inappropriate algorithmic processing
- Examples of lack of transparency:
- Users who might be facing higher prices compared to others with a similar profile
- Users who don’t know how their data is being used for targeting advertisement purposes
- Users who are not told why they have a poor credit score
- Fairness for individuals affected by algorithmic processing: Needed to ensure the trust of consumers and citizens / Ensure compliance with the Equality Act
For example:
- Ensure training data does not embody any bias
- Remove information about sensitive characteristics
- Consider the fairness of personalised pricing – is it fair that businesses work out how much you would be willing to pay for a particular product and offer that price? Is it fair that those who live in areas with a higher incidence of burglary should pay more for, say, home insurance?
- Access to information, products, services and rights
For example:
- Limiting exposure to alternative viewpoints – control over algorithms that result in exposing certain users to harmful content (eg. content inciting violence, anti-vax conspiracy theories) on a repeated basis
- Limiting exposure to economic opportunities – as an example, women may be shown fewer STEM-based job opportunities. Controls should be placed to avoid such outcomes.
- Resilience of infrastructure and algorithmic systems
For example:
- Ensure models cannot be compromised by a bad actor poisoning the training data
- Ensure personal data cannot be inferred from the training datasets
- Resilience against cyberattacks
- Individual autonomy for informed decision-making and participating in the economy
For example:
- Ensure that targeting does not amount to manipulation leading to users making decisions that they otherwise would not make. Some recommender systems might fall foul of this.
- Some users are more vulnerable than others to targeted advances by service providers (eg. children, people with learning disabilities, the elderly, those with addictions) – they would need to be safeguarded
- Avoidance of harmful choice architectures – eg. making it difficult for users to unsubscribe, or pre-selected default options
- Giving users the option not to be targeted based on their profile
- Healthy competition to foster innovation and better outcomes for consumers
For example:
- Making sure that platforms do not have an unfair advantage by recommending their own products over third parties’ (so-called Self-preferencing)
- Making clear which links have been up-ranked (eg. those that are sponsored)
- Ensuring that connected algorithmic systems operate fairly – and for example, don’t lead to tacit collusion such as on price
- Preventing organisations with data power from accumulating granular information on individuals across their online journey that they then use to personalise their offerings, which can exacerbate information asymmetry between consumers and service providers.
The above summarises just some of the issues of algorithmic processing that were raised. I think we would all benefit from having at least a superficial understanding of these, as consumers and as business persons.
Call for Views 2: auditing algorithms, the current landscape and the role of regulators
The DRCF notes that “while algorithmic auditing is currently not conducted extensively and the market for audits is relatively immature, an audit ecosystem will likely grow in the coming years”. This means that businesses deploying algorithmic processing ought to set up their systems with an eye to possible audit obligations, and ensure robust governance is in place to enable regulators to clear their systems.
Indeed, the paper notes that a number of algorithmic audits have already taken place:
- The CMA has been investigating Amazon and Google over concerns they have not done enough to tackle fake and misleading reviews on their sites including by examining their review moderation and product rating systems
- The ICO and its Australian counterpart investigated Clearview AI Inc’s facial recognition technology due to suspected breaches of UK and Australian data protection laws
- The ICO investigated the use of data analytics and personalised microtargeting in political campaigns in 2017
- The Australian Competition and Consumer Commission inspected Trivago’s algorithms, revealing that the ranking of hotels was weighted towards those hotels that paid higher commissions to Trivago, rather than those providing the cheapest rates available to consumers
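The Trivago example lends itself to a simple illustration of what such an audit compares: the order a commission-weighted score produces versus a plain cheapest-first order. The hotels, prices, commissions and scoring formula below are all invented for illustration; they are not Trivago’s actual data or algorithm.

```python
# Hypothetical listings: price per night and commission paid to the platform.
hotels = [
    {"name": "Hotel A", "price": 120, "commission": 0.25},
    {"name": "Hotel B", "price": 90,  "commission": 0.10},
    {"name": "Hotel C", "price": 100, "commission": 0.20},
]

# What a consumer would expect from a "best deal" ranking:
by_price = [h["name"] for h in sorted(hotels, key=lambda h: h["price"])]

# A (hypothetical) ranking score that rewards commission paid to the platform:
by_score = [
    h["name"]
    for h in sorted(hotels, key=lambda h: h["price"] * h["commission"], reverse=True)
]

print("cheapest first:", by_price)
print("platform score:", by_score)
# When the two orders diverge, users told they are seeing the best deals
# may in fact be seeing the listings most profitable to the platform.
```

An audit of this kind does not need access to the source code: comparing observed rankings against the stated ranking criterion can be enough to reveal the weighting.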
The DRCF paper goes into significant detail about the issues for auditing, the level and process of auditing, the existing landscape, and thoughts on future landscapes for algorithmic auditing.