Highest Rated Comments


AndyBarnesAI · 29 karma

Great question, thanks for asking.

The ethical and legal questions around AI still don't have complete answers. The legal frameworks for AI usage and development just aren't here yet; I believe the EU will be the first to introduce an AI legislative framework in 2025. In terms of tackling these challenges, at least at an ethical level, I believe it's down to the developers of these tools to take morality seriously. For example, the data used to train AI systems: do we know where it is coming from? Are we happy with where it is coming from? If not, don't use it. I'd make the same argument for users.

In my ideal world every AI system would come with a label, similar to the food labels we have in shops, telling you where the AI is from, what data was used to make it and how it was trained. Unfortunately we're not there yet. Such a label would allow users to make an informed decision. In the meantime, however, it's down to all of us to ensure the tools we use and develop are built with society and people at their center.

Apologies that I don't have a straight answer for you. Ethics and legality are ever-evolving subjects, and until the law catches up we have to rely on our better judgement.

Thanks for the question,

Andy

AndyBarnesAI · 14 karma

Ooooh another good question!

I ran a workshop over the summer with around 25 retired people where I showed them some AI tools; they had some hands-on experience playing with them, and then we discussed the ethical issues. One thing I found is that people were underwhelmed by what AI actually is. AI is just complicated data processing used to create a model. More often than not people see AI as a mystical force when in actual fact it's just another data-processing method, albeit a complicated one!
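To make that "just data processing" point concrete, here's a minimal toy sketch in Python (my own made-up illustration, not something from the workshop): the "training" step simply crunches a handful of numbers into a model, in this case a straight line, and the model is only as good as the data that went in.

```python
# Toy illustration: "AI" as data processing that produces a model.
# Here the "model" is just a straight line y = a*x + b fitted to made-up data.

def train(xs, ys):
    """Process the data and return the model (slope a, intercept b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

def predict(model, x):
    """Use the learned model to make a prediction for a new input."""
    a, b = model
    return a * x + b

# Made-up data: hours of sunshine vs ice creams sold.
hours = [1, 2, 3, 4, 5]
sales = [12, 19, 31, 42, 48]

model = train(hours, sales)
print(predict(model, 6))  # the model's guess for 6 hours of sunshine
```

A chatbot or image generator is, at heart, a far more elaborate version of that same loop: data in, patterns out.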

Another misconception I often see is the lack of distinction when people discuss 'general intelligence'. Regular AI applications are only good at one thing, one task. For example, take an AI which plays chess and ask it to drive a car, and it won't have a clue what to do! General intelligences, by contrast, are (currently fictional) models which can explore and learn how to complete new tasks with or without instruction; this is where your I, Robot or Terminator scenarios come in. They're AI models which can pick up new skills and aren't limited in their capacity.

Regarding the media question, I often find the negative news stories are the ones which get picked up the fastest. I replied to another comment earlier with a similar response: the good things we're doing with AI are often overlooked in favor of the flashy or harmful applications.

I hope this helps, thanks for the question!

Andy

AndyBarnesAI · 13 karma

Hi, thanks for the question!

On your first point about AI being used for societal good, I believe AI has already done a substantial amount of good for society, although it isn't splashed across the media. For example, AI-based cancer screenings have helped doctors catch and eliminate early cancer growths before they spread. There are a lot of applications of AI which go beyond the public-facing systems we see on the news, and they are doing good. You could also argue that the widespread awareness and availability of AI tools will help lift some people out of poverty. I'm not dismissing your point that AI has the potential to harm equality; I just want to say it can also help it.

The question of ethics is a big one and should be discussed on a wider stage, I completely agree. A lot of the work I am involved in at the moment is about spreading awareness and collecting the views and concerns of people in often-forgotten groups. One thing we've found is that the public actually love having these ethical discussions and want to participate in them, which is why it's vital for the public to be involved in the development of any AI legislative framework. Governments are catching up; for example, I mentioned in a previous reply the EU AI regulations which are coming soon, and the UK government is also exploring this space with more and more vigor.

In summary, AI isn't all doom and gloom; we're doing some amazing stuff with it, but as with any tool, put in the wrong hands it will do harm.

Hope that alleviates at least some of the concerns.

Andy

AndyBarnesAI · 9 karma

Hi and thanks for the interesting question!

The one thing I do not foresee AI ever achieving is emotional intelligence, something we humans are experts at. This is the reason we've had automatic coffee machines for decades and yet we still prefer barista coffee (or at least I do). Everything from the small interaction when they take my order to the imperfections in the latte is the reason AI hasn't replaced them, and that 'human touch' is something I do not see AI ever replacing.

I hope this is reason enough to convince you otherwise!

Andy

AndyBarnesAI · 6 karma

Hello! :)

Excellent question. I believe it's a combination of three key factors: accessibility, awareness and adaptability. The tools (and the techniques behind them) are based on extremely complex and computationally expensive models which most people wouldn't be able to run at home, but with companies such as OpenAI providing access to these models through a simple web interface, they've effectively removed the 'you need a powerful computer' barrier (there's a rough sketch of this below).

Adding to this, the adaptive nature of new AI tools has allowed the public to use them across a range of scenarios without being overwhelmed by lots of different tools for similar tasks. Finally, there's awareness, which has gained a lot of traction over the past few years: now that AI and the associated tools are in the public domain and easily accessible, it's much easier for people to form stories, opinions and use cases around them.
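On the accessibility point above, here's a rough sketch of what the hosted-model pattern looks like in practice (the URL, request and response shapes are hypothetical placeholders, not any real provider's API): the heavy model runs on someone else's servers, and all the user's machine has to do is send some text over the web and read the reply.

```python
# Sketch of the "hosted model" pattern: no powerful computer needed at home,
# because the expensive computation happens on the provider's servers.
# The endpoint and payload below are hypothetical, not a real API.
import requests

HOSTED_MODEL_URL = "https://example.com/api/generate"  # hypothetical endpoint

def ask_hosted_model(prompt: str) -> str:
    response = requests.post(
        HOSTED_MODEL_URL,
        json={"prompt": prompt},                           # hypothetical request shape
        headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder credential
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]                         # hypothetical response shape

print(ask_hosted_model("Explain AI in one sentence."))
```

The barrier that's left is a web browser or a few lines like these, which is a big part of why adoption has been so fast.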

Will it continue to gain traction? I think so, given the current state of the tools.

I hope this answers your question :)

Andy