A new report on artificial intelligence claims that the rapidly developing technology could pose a threat to established systems of government and commerce.
While the technical ramifications of AI have been speculated on for years, the legal implications and complications are only just emerging. They could leave us vulnerable to new types of crime that have never been anticipated, forcing legal experts to navigate uncharted waters.
Within five years we could be seeing entirely new types of cybercrime, hacking of complex or vital systems, or even political disruption through fake broadcasts, known as ‘deep fakes.’ Politicians’ voices could even be faked, or manipulated in ways that would be hard to disprove.
These crimes would need entirely new laws that can only be developed and implemented by forward-thinking groups. Close relationships between those who are developing the technology and those within the legal community will help to keep us safe.
Elon Musk, founder of Tesla and SpaceX, and co-founder of the OpenAI research group, has been quoted as saying that AI poses ‘our biggest existential threat.’ He added that ‘there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.’
New Protection Laws
The report, which was written by 36 authors from 14 institutions spanning academia, civil society, and industry, calls for policymakers and technical experts to work together to create new laws that can proactively protect us from a technology that could learn to overcome our current defences.
Lawyers specialising in technology are already thinking about solutions to the AI problem, among them Clive Halperin at GSC Solicitors, who advises businesses on how best to keep up with ever-changing technology.
Halperin stated: “AI techniques may help in cracking passwords and other types of cyber security in a way which might take humans many millions of man hours to actually do, and the risk is that artificial intelligence techniques will enable that to be achieved much more easily and with much more dangerous consequences.”
Clive is already looking into legal quandaries such as who would be liable in the event of a crash involving a driverless car, and how to regulate the uses of AI that promise to benefit people’s busy lives.