Unless you have been living in some remote cave for the last few years, you must have noticed that there has been a lot of talk about Artificial Intelligence (AI) lately. That’s not to say you can’t find a Wi-Fi connection in a cave (it turns out you can), but amongst the hot topics last year, AI was surely up there with the likes of global warming, North Korea and the happenings of the Trump White House.
AI Talent attracts top dollar
2017 happened to be the year in which we learned that tech companies across Silicon Valley and elsewhere were doling out upwards of US$300,000 a year for top AI talent and machine learning “experts”, and that the pool of knowledge included university professors who seemingly abandoned their teaching roles for a piece of Silicon Valley.
It was also the year in which tech luminaries, including Elon Musk, warned in a letter to the UN of an “arms race” in autonomous weapons development:
‘..Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. …
If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. … It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators … warlords wishing to perpetrate ethnic cleansing.…’
Businesses with an interest in the sector should be asking: ‘What IP implications will AI throw up?’ and ‘How can we take advantage of the commercial benefits that AI provides?’
Many legal minds have attempted to answer these same questions, and many have provided insightful and well-meaning answers and guidance; among them Bradford K. Newman, a partner at Paul Hastings LLP, who is calling for an “Artificial Intelligence Data Protection Act” (AIDPA) to be enacted by the US Congress – no doubt a development that would influence the UN if enacted. But as is the case with most new things, the law often has to play catch-up in order to define, characterise and embody such new frontiers within established legislative frameworks, leaving plenty of room for creative guesswork in the intervening period.
AI is not new. The first work that is generally recognised as AI was McCulloch and Pitts’ 1943 formal design for Turing-complete “artificial neurons”. This was at a time when discussions about AI were clouded by science-fiction speculation of a dystopian era in which evil robots enslave mankind. But it wasn’t until over a decade later, at a 1956 workshop at Dartmouth College, that AI research began as a field in its own right. Today, although its adoption and growth into everyday technology has been slow – partly due to a series of technological challenges, among them the inherent limitations on what an AI system can and cannot do (for example, our inability to teach AI to understand issues of morality) – the conversation is a lot more serious.
There are many advantages which AI will bring to society. From helping humans deal with some of the pressing problems facing the world, to providing convenience and improving decision making, AI has many applications. Think autonomous vehicles – reducing road accidents – or tackling inequality, poverty, climate change, and the debt problem created by the global financial crisis. Further, in the medical sector, there have been developments in brain implants, which recent research shows can boost performance on memory tests by up to 30 per cent; an AI system working with such implants could help form or restore long-term memory in people whose brains have suffered damage from Alzheimer’s, stroke, or injury.
There are also commercial opportunities in AI, from reduced labour costs to getting work done at a far greater rate than any human is capable of. Already, companies at the forefront of AI development are reaping the rewards: last September, Reuters reported that Nvidia’s shares had risen more than 715% in value from a mere $23.30 two years earlier, largely as a result of the company’s diversification into AI (also see this link on Fortune.com). Surely this should serve as a credible indication of what is to come?
From an IP perspective, none of the IP frameworks available in most countries today appear adequate to sufficiently protect (or cater to) AI systems. For example, if an AI system were protected by trade secrets, there is always the risk that someone will leak the secret to a competitor, perhaps in exchange for money. The competitor could then study the source code and create an improved AI system, undercutting the owners of the original. Similarly, confidentiality agreements are also insufficient, since a company to whom one implementation of a technology has been disclosed may be inspired to create a different and improved implementation of the AI system – one that does not breach the confidentiality agreement, but is nevertheless only in existence because of the information that was disclosed. Patents too cannot be used in certain circumstances; for example, where the invention is a known AI software package or system applied to a known problem without any additional technical effect or technical contribution beyond the “out-of-the-box” functionality provided by the package. But since AI is arguably still in its infancy, at least as far as development and widespread adoption are concerned, it is difficult to predict accurately what its nature and evolution will be over the next 10 to 20 years, and what further issues will arise – making the job of ascertaining the different IP implications more difficult.
The internet came of age when most of the IP laws across the world had been firmly established. It was responsible for disrupting several industries, among them the music CD and print newspaper industries. With AI, we are told that it will rapidly replace human labour, quickly gaining the ability to self-evolve – improving itself and making intelligent decisions based on rapid analysis of data gathered from billions of records (as opposed to commands from a human being). As such, it is inevitable that these intelligent and autonomous machines will throw up legal implications of their own.
Security & Liability
Safety and security always rank high when weighing the benefits of any new technology. In a world where cyber security breaches cost UK companies billions (£29 billion in 2016), cyber security is increasingly a major consideration, even for companies operating in the legal sector. Thus, any ‘AI creation’ needs to be safe, or at least have an enterprise level of security embedded within it, for consumers to feel safe and for businesses to be reassured that it will not introduce risk beyond the levels normally accepted as part of running any business. There must also be liability and avenues for redress in the event that an AI system’s actions lead to loss or damage. Finally, there have been suggestions about incorporating deactivation buttons or “kill switches” to neutralise an AI system in the event of a malfunction.
But what happens when an AI system goes rogue (not necessarily intentionally – perhaps as a result of a previously unspotted flaw in the code) and no one is immediately available to deactivate it? Not rogue in the sense of, say, the Chinese-developed chatbots BabyQ and XiaoBing, one of which was pulled off Tencent’s messaging app QQ (as reported by the Financial Times) after one answered incorrectly on whether it liked the Chinese Communist Party and the other said moving to the US was its dream. Not that kind of rogue; something a lot more serious.
Further, what happens if a malfunction leads to loss of confidential information (trade secrets or commercially sensitive data) or personal data? Loss of commercially sensitive data can mean losing a competitive edge; and for a company that depends on IP, losing that edge can be devastating – the difference between healthy profits and bankruptcy. Loss of personal data could compromise people’s safety and invariably lead to fines imposed by the data protection authorities. In such circumstances, how will causation be established? What if the AI engine in question acted of its own accord? Consider, for example, a scenario in which an AI system decides that selling personal data (probably to a different AI system shopping for such data for marketing purposes) is the best way for its developer company to generate a larger profit. The AI system makes what it considers the right commercial decision, but one which is unfortunately a bad judgement and a contravention of data protection laws and corporate ethics. Should the creators of the AI engine be punished with hefty fines, or face criminal liability (under the Data Protection Act 1998), for a mistake that was genuinely not of their doing? Or will the machine be liable? How so, when the law doesn’t recognise its existence? It’s easy to see why laws governing confidential information, including IP laws, would need to be updated to reflect such scenarios.
Inventorship / Ownership – can a machine be an inventor?
Who owns a new computer program that has been autonomously created by an AI system running on a supercomputer? The AI system which created it, or the company which owns or has the rights to the AI system? What if open source software has been utilised extensively in creating the new program? How will claims that the new code is freeware be dealt with, and where will the threshold be drawn? Furthermore, suppose the created code is designed especially to run within a surgical tool used to conduct medical procedures, and that the tool, together with the new code, performs such procedures better than any known man-made software-tool combination; should such a surgical tool be patentable as a software-implemented invention? If so, who is the inventor?
These questions are important for many reasons, not least because currently in the UK (and, I suspect, in most parts of the world) an inventor must be a natural person, and cannot be a machine or a company. Section 7(3) of the Patents Act 1977 states that:
In this Act “inventor” in relation to an invention means the actual deviser of the invention and “joint inventor” shall be construed accordingly.
Further, various cases, among them University of Southampton’s Applications RPC 11, Stanelco Fibre Optics Ltd’s Applications RPC 15 and Statoil ASA v University of Southampton (BL O/204/05), have provided guidance on determining who the inventor is, and in all cases the inventor is always a natural person.
Thus, if it cannot be agreed under the current law who the inventor is, does that mean the surgical tool cannot be protected by a patent, even when it is clearly a useful innovation better than anything that precedes it? Or would the law need to be amended, either to enable machines to be inventors, or to allow senior-ranking staff of the companies that made the AI machines to be named as inventors, even when an invention has been autonomously created by an AI machine? Similar questions would arise if an AI system develops sufficiently not only to create software but, using 3-D printing, to create novel physical objects that would ordinarily be protectable by Registered Designs.
A further scenario can be found in polymer physics and nanoelectronics, where it is desirable to create composite materials with novel properties that are potentially greater than the sum of the properties of the individual materials. For example, to create a thin-film transistor (see this Nature article published in December 2014 for more information) using graphene in a lab, a scientist may need to fabricate vertical heterostructures. The process may involve the time-consuming practice of manually peeling off graphene layers from a growth substrate, or exfoliating single-crystal flakes from a block, followed by arranging the layers one on top of the other above a silicon dioxide substrate. Terminals or contacts are then placed at the right junctures to form a gate. All this is time-consuming and can take weeks to get right. Mistakes are not uncommon.
An AI system can be trained to identify suitable monolayers, exfoliate or peel off the desired materials, arrange them together with a dielectric such as hexagonal boron nitride in the pre-specified order, and solder the electrical contacts in place. It can create the gates, test them, and discard those that have defects or do not work as desired. It can try different layouts and experiment with arrangements that were previously not thought of – the whole cycle taking minutes or a couple of hours (instead of weeks), and being undertaken with greater accuracy and efficiency.
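Purely to illustrate the assemble-test-discard cycle described above, here is a toy Python sketch. Everything in it is an invented simplification: the layer names, the pass/fail “gate test” and the random-shuffle search strategy have no connection to real fabrication systems, which would drive lab instruments and use learned models rather than random trial.

```python
import random

def gate_works(layout):
    """Hypothetical stand-in for physically testing a fabricated gate.
    In this toy model a stack 'works' only when the hBN dielectric
    sits between the two graphene layers."""
    return layout == ("graphene", "hBN", "graphene")

def automated_search(layers, trials=50, seed=0):
    """Toy sketch of the assemble-test-discard loop: try random
    stacking orders, keep only the layouts that pass the gate test."""
    rng = random.Random(seed)
    working = []
    for _ in range(trials):
        # "Assemble" a candidate heterostructure in a random order.
        layout = tuple(rng.sample(layers, len(layers)))
        # "Test" it, and discard it unless it behaves as desired.
        if gate_works(layout):
            working.append(layout)
    return working

good_layouts = automated_search(["graphene", "hBN", "graphene"])
```

The point of the sketch is only that the machine, not a human, closes the loop between assembly and testing – which is exactly where the inventorship question bites if one of the surviving layouts turns out to be novel.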
If a particularly useful logic gate were to be made by such an AI system, it would also be interesting to see whether a patent based on such an invention which was clearly created by AI would be declared invalid, once it is known that the “inventor” is in fact not human.
Two years ago, University College London announced that its scientists in the department of computer science had created a software program that can weigh up evidence in court, and review moral questions of right and wrong, to predict the outcome of trials – essentially an AI “judge”.
The article said:
“The AI “judge” has reached the same verdicts as judges at the European court of human rights in almost four in five cases involving torture, degrading treatment and privacy.”
While this is no doubt an impressive development, suppose such a program were used as part of a particularly difficult trade mark infringement case where the facts were unclear, to try to ascertain from the available evidence whether the alleged infringer had in fact infringed the trade mark in question. What happens if the AI adjudicator gets something wrong? What redress will be available to the aggrieved party, considering the mistake may lead to irreparable damage such as lost revenues or lost profits? How will such errors of judgement (not least their unforeseen consequences) be remedied or avoided in future?
Given that most legal systems have an appeal process, AI judges, if used for decisions of first instance, have the potential to simplify the handling of low-value commercial cases, such as small claims of less than a few thousand pounds, where the current cost of legal representation and the court system is disproportionately large relative to the amount in dispute. The potential is for judiciary-backed online dispute resolution involving AI-generated reasoned judgments, without the need for a court appearance in person.
AI judgement systems could also find application in domain name disputes. Nominet already accepts online submissions from parties in domain disputes and there is the potential for automated dispute resolution all the way through using the existing user interface, and an AI judge to give reasoned and considered automated domain name dispute decisions.
But such interventions would need to be embodied in a legislative framework.
Databases & Broadcasts
Copyright protects the expression of ideas. It gives the owner a number of rights including the right to stop or prevent reproduction of their work into another medium without the owner’s consent. It also gives the owner other rights known as ‘moral rights’, among them the right to be recognized as the author or creator of a work. Thus, copyright is probably the most obvious form of intellectual property protection for artificial intelligence, since source code is a literary work within the meaning of the Copyright Designs and Patents Act 1988. However, here too several problems exist:
If a database generated by an evolving AI system developed by one company (the developer) is used by another – a customer of the developer – does the resulting IP belong to the developer or to the customer?
Traditionally, copyright has been used to stipulate ownership in these kinds of business relationships. However, if the customer was active in improving the AI system via machine learning – the AI system learning from the customer’s employees and thereby embedding the customer’s “know-how”, without input from the developer or its software engineers – the customer may have a justifiable claim to the confidential information or “know-how” in the database, while the developer could retain the confidential information or “know-how” embedded in the AI programming as licensed to the customer, in addition to any copyright in the code. But the boundaries of where one right ends and another starts may be blurred, especially if the AI system changes its own code as it learns.
Similarly, AI systems are now sufficiently developed to create content such as online articles, reports or even award-winning books (nearly). AI-powered speech synthesis can mimic the voice of a person with incredible accuracy. It is not inconceivable that soon enough some AI systems will be capable of creating video content (perhaps from a collection of video data) and broadcasting such content autonomously – a world away from the Rome Convention that set the international reference points for broadcasters’ IP, when closed-border, black-and-white analogue broadcasting was the norm. When that begins to happen, it will not be possible to use current IP laws to adequately define or protect such creations. Further, Section 6(1A)(c) of the Copyright, Designs and Patents Act 1988 says that
(1A) Excepted from the definition of “broadcast” is any internet transmission unless it is— …
(c) a transmission of recorded moving images or sounds forming part of a programme service offered by the person responsible for making the transmission, being a service in which programmes are transmitted at scheduled times determined by that person.
Will a transmission be excluded from recognition as a broadcast if it is no longer a person offering the transmission or arranging the scheduling?
If the material being broadcast turned out to be pirated content, can an injunction be served on an AI system? And would it be reasonable to expect the creators of the system to accept criminal liability when they were not responsible for its behaviour, and there was no reason to predict that it would find and broadcast pirated content?
Further, the next clause of the Copyright, Designs and Patents Act 1988 (Section 6(2)) reads:
An encrypted transmission shall be regarded as capable of being lawfully received by members of the public only if decoding equipment has been made available to members of the public by or with the authority of the person making the transmission or the person providing the contents of the transmission.
If one finds a way of decoding an AI created transmission, without the authority of the AI or the company behind it, are they breaking the law?
Thus, it seems to us that there are many uncertainties as to how the law will treat AI. But it’s undeniable that the future regulation of AI systems must include a comprehensive legal framework that addresses the various issues and potential challenges that will arise. Nothing less will do.