
Not Just Another Discussion About Whether AI Is Going To Destroy Us

An AI roundtable discussion is a staple of the tech journalism circus — usually framed with a preamble about dystopian threats to human existence from the inexorable rise of ‘superintelligent machines’. Just add a movie still from The Terminator.
What typically results from such a set-up is a tangled back and forth of viewpoints and anecdotes, where a coherent definition of AI fails to be an emergent property of the assembled learned minds. Nor is there clear consensus about what AI might mean for the future of humanity. After all, how can even the most well-intentioned groupthink predict the outcome of an unknown unknown?
None of this is surprising, given we humans don’t even know what human intelligence is. Thinking ourselves inside the metallic shell of ‘machine consciousness’ — whatever that might mean — is about as fruitful as trying to imagine what our thoughts might be if our own intelligence were embodied inside the flesh of a pear, rather than the fleshy forms we do inhabit. Or if our consciousness existed fleetingly in liquid paint during the moment of animation by an artist’s intention. Philosophers can philosophize about the implications of AI, sure (and of course they do). But only an idiot would claim to know.
The panel discussion I attended this week at London’s hyper-trendy startup co-working hub Second Home trod plenty of this familiar ground. So I won’t rehash the usual arguments. Rather, acting (as some might argue) more like a machine — in the sense of an algorithm trained to surface novelty from a mixed data dump — I’ve compiled a list (below) of some of the more interesting points that did emerge as panelists were asked to consider whether AI is “a force for good” (or not).
I’ve also listed some promising avenues for (narrow) AI mentioned by participants: areas where they see potential for learning algorithms to solve problems humans might otherwise find tricky to crack, and where those use-cases can be broadly considered socially beneficial — an effort to steer the AI narrative away from bloodthirsty robots.
The last list is a summary of more grounded perceived threats/risks, i.e. those that skip the stereotypical doomsday scenario of future ‘superintelligent machines’ judging humans a waste of planetary space, and focus instead on risks associated with the kind of narrow but proliferating — in terms of applications and usage — ‘AI’ we do already have.
One more point before switching to bullets and soundbites: the most concise description of (narrow) AI that emerged during the hour-long discussion came from Tractable founder Alexandre Dalyac, who summed it up thus: “Algorithms compared to humans can usually tend to solve scale, speed or accuracy issues.”
So there you have it: AI, it’s all about scale, speed and accuracy. Not turning humans into liquid soap. But if you do want to concern yourself with where machine intelligence is headed, then thinking about how algorithmic scale, speed and accuracy — applied over more and more aspects of human lives — will impact and shape the societies we live in is certainly a question worth pondering.
Panelists
  • Calum Chace, author of ‘Surviving AI’
  • Dan Crow, CTO, Songkick
  • Alexandre Dalyac, founder, Tractable
  • Dr Yasemin J Erden, Lecturer/Programme Director Philosophy, St Mary’s University
  • Martina King, CEO, Featurespace
  • Ben Medlock, founder, SwiftKey
  • Martin Mignot, Principal, Index Ventures
  • Jun Wang, Reader in Computer Science, UCL, and co-founder/CTO, MediaGamma
Discussion points of above-average interest:
  • Should AI research be open source by default? How can we be expected to control and regulate the social impact of increasingly clever computing when the largest entities involved in AI fields like deep learning are commercial companies such as Google that do not divulge their proprietary algorithms?
“If the future of humanity is at stake should they be forced to open source it? Or how can we control what’s happening there?” asked Mignot. “I don’t think anyone knows what Google is doing. That’s one of the issues, that’s one of the worries we should have.”
A movement to open source machine learning-related research could also be a way to lessen public fears about the future impact of AI technologies, added Wang.
  • Will it be the case that the more generalist our machines become, the less capable and/or reliable for a particular task — and arguably, therefore, the less safe overall? Is that perhaps the trade-off when you try to make machines think outside a (narrow) box?
“One of the interesting philosophical questions is whether your ability to do a particular task with absolute focus — and reduce the false positives, increase the safety — actually requires a narrow form of intelligence. And at the point where our machines start to become more general, and sort of inherently more human-like, whether necessarily that introduces a reduction in safety,” posited Medlock.
“I can imagine that the kind of flexibility of the human brain, the plasticity to respond to so many different scenarios requires a reduction in specific abilities to do particular tasks. I think that’s going to be one of the interesting things that will emerge as we start to develop AGI [artificial general intelligence] — whether actually it becomes useful for a very different set of reasons to narrow AI.”
“I don’t think artificial intelligence in itself is what I would be concerned about, it’s more artificial stupidity. It’s the stupidity that comes with either a narrow focus, or a misunderstanding of the broader issues,” added Erden. “The difficulty in trying to establish all the little details that make up the context in which individual specific tasks happen.
“Once you try to ask individual programs to do very big things, and they need therefore to take into account lots of issues, then it becomes much more difficult.”
  • Should core questions of safety or wider ethical worries about machine-powered decision-making usurping human judgment be society’s biggest concern as learning algorithms proliferate? Can you even separate safety from ethics at that fuzzy juncture?
“The guys who built the Web put it up and out there and didn’t really think about the ethics at all. Didn’t think about putting those tools into the hands of people who would use those tools negatively, instead of positively. And I think we can take those lessons and apply them to new technologies,” argued King.
“A good example for the Web would be people believing that the laws of California were appropriate to everywhere around the world. And they aren’t, and they weren’t, and actually it took those Web companies a huge amount of time — and it was peer group pressure, lobby groups and so on — in order to get those organizations to behave actually appropriately for the laws of those individual countries they were operating in.”
“I’m a bit puzzled that people talk about AI ethics,” added Chace. “Machines may well be moral beings at some point but at the moment it’s not about ethics, it’s about safety. It’s about making sure that as AIs get more and more powerful that they are safe for humans. They don’t care about us, they don’t care about anything. They don’t know they exist. But they can do us damage, or they can provide benefits, and we need to think about how to make them safe.”
  • Will society benefit from the increased efficiency of learning algorithms or will wealth be increasingly concentrated in the hands of (increasingly) few individuals?
“I’d suggest… whenever AI comes in, even potentially to replace labour, it’s genuinely because it’s an efficiency gain — so creating more. But then perhaps the way to think about it is how this efficiency gain is distributed. So if it’s concentrated in the hands of the owners perhaps that tends to be not of good value to society. But if the benefits accrue to society at large that’s potentially better,” said Dalyac.
“For example something that we’re working on is automating a task in the visual assessment of insurance claims. And the benefit of that would be to lower insurance premiums for car insurance… so this would be a case where the people who are usually employed to do this would find themselves out of work, so that might involve maybe 400 people in this country. But as a result you have 50 million people that benefit.”
  • Should something akin to the ‘philosophy of AI’ be taught in schools? Given we’re encouraging kids to learn coding, what about contextualizing that knowledge by also teaching them to think about the social impacts of increasingly clever and powerful decision-making machines?
“Should it be a discipline at school where students would learn about AI?” asked Mignot. “Could it be interesting to have classes around one step further. Once you know how to code a computer in a binary language, what does it mean to create an intelligent device?
“I think that would help a lot with the discussion because today coders don’t really understand the limitations and the potential of technology. What does it mean to be a machine that can learn by itself and make decisions? It’s so abstract as a concept that I think for people who are not working in the field it’s either too opaque to even consider, or really scary.”
  • Is the umbrella term ‘artificial intelligence’ actually an impediment to public awareness and understanding of myriad developments and (potential) benefits associated with algorithms that can adapt based on data input?
“We’re asking people to understand something that we’ve not really understood ourselves, or classified at least. So, when we’re talking about smartphones we’re not really talking about AI, we’re talking about some clever computing. We’re talking about some very interesting programming and the possibility that this programming can learn and adapt but in very, very simple ways,” said Erden.
“When you describe it like that to people I don’t think they’re either scared by it or fail to understand it. But if you describe this under the umbrella term of AI you promise too much, you disappoint a lot and you also confuse people… What’s wrong with saying ‘clever computing’? What’s wrong with saying ‘clever programming’? What’s wrong with saying ‘computational intelligence’?”
  • Is IBM’s ‘cognitive computing’ tech, Watson — purportedly branching out from playing Jeopardy to applying its algorithmic chops to very different fields, such as predictive medicine — more a case of clever marketing than an example of an increasingly broad AI?
“I would say that if you take a look at the papers you’ll realize that Watson might just be pure branding. All it is is a very large team of researchers that have done really well on a single task, and have said ‘hey let’s call it Watson’, and let’s make it this ‘super intelligent being’, so the next time they ask us to do something intelligent we’ll get the same researchers, or similar researchers to work on something else,” argued Dalyac.
“We’re looking at automating the assessment of damage on cars, and there’s a paper by IBM Watson in 2012 which, to be honest, uses very, very old school AI — and AI that I can say for sure has nothing to do with winning at Jeopardy,” he added.
Promising applications for learning algorithms cited during the roundtable:
  • Helping websites weed out algorithmically generated ad clicks (the irony!)
  • Analyzing gamblers’ patterns of play to identify problematic tipping points
  • Monitoring skin lesions more effectively by using change point detection (a minimal sketch of the idea follows this list)
  • Creating social AIs that can interact with autistic kids to reduce feelings of isolation
  • Tackling the complexity of language translation by using statistical approaches to improve machine translation
  • Putting sensors on surgical tools to model (and replicate) the perfect operation
  • Using data from motion sensors to predict when a frail elderly person might be at risk of falling by analyzing behavioral patterns
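To make the change point detection idea above a little more concrete, here is a minimal sketch in Python. It assumes a hypothetical series of lesion-size measurements and simply looks for the split point where the mean of the series shifts most sharply (by minimising the squared error on either side of the split). It illustrates the general technique only; nothing here reflects how any panelist’s actual system works.

# A minimal sketch of change point detection (hypothetical data, illustrative only).
# It finds the single split point that most reduces total squared error,
# i.e. the point where the mean of the series appears to shift.

def sse(xs):
    """Sum of squared deviations from the mean of xs."""
    if not xs:
        return 0.0
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs)

def best_change_point(series):
    """Return (index, gain): the split that most reduces left + right SSE."""
    total = sse(series)
    best_i, best_gain = None, 0.0
    for i in range(1, len(series)):
        gain = total - (sse(series[:i]) + sse(series[i:]))
        if gain > best_gain:
            best_i, best_gain = i, gain
    return best_i, best_gain

# Hypothetical monthly lesion measurements (mm): stable, then a sudden step up.
measurements = [4.1, 4.0, 4.2, 4.1, 4.3, 5.9, 6.1, 6.0, 6.2]
idx, gain = best_change_point(measurements)
print(f"Change detected after observation {idx} (variance reduction {gain:.2f})")

A real deployment would use more robust statistics and handle multiple change points, but the core question it answers is the same: has the underlying process shifted?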
Some near-term concerns about the proliferation of machine learning plus big data:  
  • How to regulate and control increasingly powerful and sophisticated data processing across borders where different laws might apply?
  • How to protect user privacy from predictive algorithms and ensure informed consent of data processing?
“Over the last decade or so the use of data has largely been something that happens below the surface. And users’ data gets passed around and fed to targeting networks and I think, and to some degree I hope, there will be a change over the next ten years or so where partly people become aware that the data that is collected, that characterizes the things they do, their likes and interests, that that’s an asset that actually is theirs to own and control,” argued Medlock.
“Moving towards consumers thinking about data a little bit like a currency in the same way that they use and own their own money, and that they’re able to make decisions about where they share that data… Moving the processing, manipulation and storage of data from the murky depths, to something that people are at least aware of and can make decisions about intentionally.”
  • How to respond to the accumulation of massive amounts of data — and the predictive insights that data can yield — in the hands of an increasingly powerful handful of technology companies?
“That will continue to be a challenge, for governments, for industry, for academia. We’re not going to solve that one quickly but there are a lot of people thinking hard about that,” said Crow. “If you look at some of the regulatory stuff that’s happening, certainly in the EU and starting to happen in the US as well, I think you are seeing people at least understanding there’s a concern there now.
“And that this is an area where government needs to play an effective role. I don’t think we know exactly what that looks like yet — I don’t think we’ve finished that discussion. But at least a discussion is happening now and I think that’s really important.”
  • How to avoid algorithmic efficiencies destroying jobs and concentrating more and more wealth in the hands of fewer and fewer individuals?
A survey of U.K. users conducted by SwiftKey ahead of the panel discussion found that fear of jobs being made redundant by advances in AI was of concern to the majority (52 per cent) of respondents, while just over a third (36 per cent) said they want to see AI having a bigger role in society — suggesting the remaining two-thirds would, at the least, prefer checks and balances on the proliferation of machine learning technologies.
Bottom line: if increasing algorithmic efficiency is destroying more jobs than it’s creating, then massive social restructuring is inevitable. So for human brains to ask questions about who benefits from such accelerated change, and what kind of society people want to live in, is surely just prudent due diligence — not to mention the very definition of (biological) intelligence.
