Banning scary-sounding ideas may be comforting, but it does more harm than good

Abstract image of an AI brain (Yuichiro Chino/Getty Images)

In 1818, Mary Shelley invented a technology that has been used for both good and ill in the centuries since. It is called science fiction.

You may not think a literary genre counts as technology, but sci-fi stories have long been tools for predicting and critiquing science. Shelley’s Frankenstein, regarded by many as the first true sci-fi novel, was powerful enough for South Africa to ban it in 1955. It set the formula with a tale that serves even today as a warning of unintended consequences.

The precise science employed by the eponymous Victor Frankenstein in his creation isn’t, as far as we know, possible. But researchers today are able to bring dead human brains back to something resembling life. Experiments are under way to resume cellular activity (but, crucially, not consciousness) after death to test the effects of treatments for the likes of Alzheimer’s disease (see “The radical treatments bringing people back from the brink of death”).

It is hard not to think of the many sci-fi tales dealing with similar scenarios and imagine what might happen next. The same is true of work reported in “AI simulations of 1000 people accurately replicate their behaviour”, in which researchers are using the technology behind ChatGPT to replicate the thoughts and behaviours of specific individuals, with startling success.

The teams behind the work are blurring the lines of fact, fiction and what it means to be human

In both cases, the teams behind this research, which blurs the lines between fact, fiction and what it means to be human, are deeply aware of the ethical concerns it raises. Their work is being conducted under strong ethical oversight, with its details made public at an early stage. But now that the technology has been demonstrated, there is nothing to stop more nefarious groups from attempting the same, without oversight and with the potential to cause great harm.

Does that mean the research should be banned, as Shelley’s book was, for fear of it getting into the wrong hands? Far from it. Concerns about technology are best dealt with through appropriate, evidence-based regulation and swift punishment for transgressors. When regulators overreach, we miss out not only on the technology, but also on the chance to critique and discuss it.
