Hi Reddit! I’m Andy from the University of Bath.

I’m a lecturer in Artificial Intelligence (AI) and my research focusses on preparing AI for society and preparing society for AI. There are two key aspects to this research. The first is technical and focussed on the engineering of systems that use AI, such as their monitoring and maintenance. The aim is to answer questions like: How do we know this will still work in 5 years’ time? What safety procedures are in place if something goes wrong? How do we fix malfunctions? The second aspect is much more societal: ensuring society is ready for AI.

To do this I work with members of the public, not only to ensure everyone understands how AI is used but also to highlight and raise the ethical, moral and sometimes legal issues surrounding it. I believe both tasks are essential as AI becomes more prominent throughout our lives, a prominence which is only growing day by day.

I’d love to answer your questions on AI, its role in society and/or recent applications of AI. Please Ask Me Anything!

I’ll be online to answer questions on Monday 9 October.

Proof: https://www.flickr.com/photos/uniofbath/53245794990/in/dateposted/

Thank you everyone for your questions today! I'm off to enjoy the sun here in the UK.

Comments: 72 • Responses: 11

David-J36 karma

How do you tackle all the ethical and legal problems generative AI is creating?

Almost all the popular ones (ChatGPT, Stable Diffusion, etc.) are being sued for infringing copyright and using content without permission.

That seems to be an issue people tend to ignore. How did these AI models become so good in the first place?

AndyBarnesAI29 karma

Great question, thanks for asking.

The ethical and legal issues around AI are ones we still don't have complete answers to. The legal frameworks for AI usage and development just aren't here yet; I believe the EU will be the first to introduce an AI legislative framework, in 2025. In terms of tackling these challenges, at least at an ethical level, I believe it's down to the developers of these tools to take morality seriously. For example, take the data used to train these systems: do we know where it is coming from? Are we happy with where it is coming from? If not, don't use it. I'd make the same argument for users.

In my ideal world every AI system would come with a label, similar to the food labels we have in shops, telling you where the AI is from, what data was used to make it and how it was trained. Unfortunately we're not there yet. Such a label would allow users to make an informed decision. In the meantime, it's down to all of us to ensure the tools we use and develop are built with society and people at their centre.
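Purely as an illustration of what I mean by a label (a sketch of my own, loosely in the spirit of published "model card" proposals, not any agreed standard), it might capture fields like these:

```python
from dataclasses import dataclass, field

# Illustrative only: a hypothetical "nutrition label" for an AI system.
# All field names and example values are invented for the sketch.
@dataclass
class AISystemLabel:
    developer: str                      # who built the model
    data_sources: list[str]             # where the training data came from
    training_method: str                # how the model was trained
    intended_use: str                   # what the system is meant for
    known_limitations: list[str] = field(default_factory=list)

label = AISystemLabel(
    developer="Example Lab",
    data_sources=["licensed news archive", "public-domain books"],
    training_method="supervised learning on labelled examples",
    intended_use="drafting and summarising text",
    known_limitations=["may produce incorrect facts"],
)
print(label)
```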

Apologies that I don't have a straight answer for you; ethics and legality are ever-evolving subjects, and until the law catches up we have to rely on our better judgement.

Thanks for the question,

Andy

kamace1125 karma

Curious about how AI seems to be harnessed at wide scale only for profit/to drive down labor costs. Does it have any foreseeable application in actually solving inequality and human problems, vs. heightening them? I do believe it has the power to help a huge amount, but that it's being almost laughably applied in the wider public (taking over creative jobs, for example - one of the few professions humans take joy in).

Just spent a week in San Fran at the TechCrunch Disrupt event and it was honestly harrowing - AI company CEOs on DoD panels talking openly about when they'll need to "overcome the belief" that AI systems shouldn't be able to kill people without human input, for example... and 0 meaningful panels on ethics. I get the impression that "ethics", such as they are in AI development, are extremely subjective and based on the prejudices and whims of an extremely narrow set of people. Can you set me straight on that? It feels like a conversation that is talked about, but never actually had.

AndyBarnesAI13 karma

Hi, thanks for the question!

On your first point, about AI being used for societal good: I believe AI has already done a substantial amount of good for society, although it isn't being splashed across the media. For example, AI-based cancer screening has helped doctors catch and eliminate early-stage cancerous growths before they begin to spread. There are a lot of applications of AI which go beyond the public-facing systems we see on the news, and they are doing good. You could also make the argument that, with the widespread awareness and availability of AI, these tools will help lift some people out of poverty. I'm not dismissing your point that AI has the potential to harm equality, but I want to stress that it can also help it too.

The question of ethics is a big one and, I completely agree, should be had on a wider stage. A lot of the work I'm involved in at the moment is about spreading awareness and collecting the views and concerns of people in often-forgotten groups. One thing we've found is that the public actually love having these ethical discussions and want to participate in them, which is why it's vital for the public to be involved in the development of any AI legislative framework. Governments are catching up: I mentioned in a previous reply the EU AI regulations which are coming soon, but the UK government is also exploring this space with more and more vigour.

In summary, AI isn't all doom and gloom; we're doing some amazing stuff with it, but as with any tool, in the wrong hands it will do harm.

Hope that alleviates at least some of the concerns.

Andy

CuriousRedPandaBear10 karma

Your work sounds really interesting. What are the most common misconceptions that you come across about AI? Do you think the media has a positive or negative effect on how we think of AI?

AndyBarnesAI14 karma

Ooooh another good question!

I ran a workshop over the summer with around 25 retired people where I showed them some AI tools, gave them some hands-on experience playing with them, and then we discussed the ethical issues. One thing I found is that people were underwhelmed by what AI actually is. AI is just complicated data processing used to create a model. More often than not people see AI as this mystical force when in actual fact it's just another data processing method, albeit a complicated one!
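To make that concrete, here's a deliberately tiny sketch (using scikit-learn, with made-up numbers) of what "fitting a model to data" actually looks like underneath most AI:

```python
# A minimal sketch: "AI" as data processing. We fit a simple model to a few
# made-up (house size, price) pairs and use it to predict a new value.
from sklearn.linear_model import LinearRegression

sizes = [[50], [70], [90], [120]]               # inputs (square metres)
prices = [150_000, 200_000, 250_000, 320_000]   # matching outputs (invented)

model = LinearRegression()
model.fit(sizes, prices)        # "training" = finding a line through the data

print(model.predict([[100]]))   # estimate the price of a 100 m^2 house
```

Real systems use vastly bigger models and datasets, but the principle is the same: data in, patterns out.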

Another misconception I see often is the lack of distinction when discussing 'general intelligence'. Regular AI applications are only good at one thing, one task. For example, consider an AI which plays chess: take this AI and ask it to drive a car and it won't have a clue what to do! General intelligences, on the other hand, are (currently fictional) models which can explore and learn how to complete new tasks with or without instruction; this is where your I, Robot or Terminator scenarios come in. They're AI models which can pick up new skills and aren't limited in their capacity.

Regarding the media question, I often find the negative news stories are the ones which get picked up the fastest. I replied to another comment earlier with a similar response: the good things we're doing with AI are often overlooked in favour of the flashy or misapplied ones.

I hope this helps, thanks for the question!

Andy

Annual-Mud-9876 karma

Hi Andy, your work sounds really interesting and relevant to a lot of discussions in the media at the moment.

My question is why are there so many AI tools popping up right now? Have there been recent advances in AI or is it just that we're only now hearing about it? Thanks!

AndyBarnesAI6 karma

Hello! :)

Excellent question. I believe it's a combination of three key factors: accessibility, awareness and adaptability. The tools (and the techniques behind them) are based on extremely complex and computationally expensive models which most people wouldn't be able to run at home. But with companies such as OpenAI providing access to these models through a simple web interface, they've effectively removed the 'you need a powerful computer' barrier.

On top of this, the adaptive nature of the new AI tools has allowed the public to use them across a range of scenarios without being overwhelmed by lots of different tools that do similar tasks. Finally, there's awareness, and this is something we've seen a lot of traction in over the past few years: now that AI and the associated tools are in the public domain and easily accessible, it's much easier for people to form stories, opinions and use-cases around them.
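To illustrate the accessibility point: instead of running a huge model locally, a developer (or a curious user) just sends a request to a hosted service. A rough sketch against OpenAI's public chat-completions endpoint might look like the following; the model name and response handling are illustrative, so check the current API docs before relying on it:

```python
import os
import requests

# Sketch only: call a hosted model over HTTP rather than running it at home.
# Endpoint and payload follow OpenAI's documented chat-completions API;
# the model name here is illustrative.
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Explain AI in one sentence."}],
    },
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])
```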

Will it continue to gain traction? I think so given the current state of the tools.

I hope this answers your question :)

Andy

wackychimp4 karma

My question with AI is how do we weed out bad data? How do you ensure that your training dataset is correct & factual information - or as correct as possible?

If you feed a model like ChatGPT "everything on the web" then there's a lot of misinformation getting into your AI.

AndyBarnesAI4 karma

Hi and thanks for the question!

When it comes to bad data and datasets, this is where quality control comes in. Before we even consider training an AI we need to ensure the data we have is clean and representative of the problem we're trying to solve. This is usually a manual process requiring expert input from people who know more about the problem than us AI nerds. For example, I do a lot of weather and storm forecasting using AI, but in order to do that I need lots and lots of meteorological data. If I were to use only data from this year my model wouldn't be very good; likewise, if I used data generated by a child my model would likely perform worse. Instead I use well-maintained datasets from expert organisations such as the Met Office, NOAA and ECMWF, who perform rigorous cleaning and quality checks. The best way to ensure you have good quality data is to check with a domain expert.
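To give a flavour of what that quality control looks like in practice, here's a small, hypothetical sketch (file name, column names and thresholds all invented) of the kind of sanity checks you might run on weather observations before training anything:

```python
import pandas as pd

# Hypothetical example: basic quality checks on weather observations before
# they go anywhere near a model. Columns and thresholds are invented.
df = pd.read_csv("station_observations.csv", parse_dates=["timestamp"])

df = df.drop_duplicates(subset=["station_id", "timestamp"])  # repeated records
df = df.dropna(subset=["temperature_c", "wind_speed_ms"])    # incomplete rows

# Flag physically implausible values rather than silently trusting the file.
plausible = df["temperature_c"].between(-60, 60) & df["wind_speed_ms"].between(0, 120)
print(f"Dropping {len(df) - plausible.sum()} implausible rows")
df = df[plausible]
```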

Hope this answers your question :)

Andy

Mcshiggs3 karma

Do you have AI write your lectures on AI?

AndyBarnesAI4 karma

Shhh... Don't tell anyone...

I'm joking, of course. I have not used AI to write my lectures, although I have used it in the past to provide an outline for certain documents I've written (for example, a teaching plan). I'd be extremely careful about trusting something like ChatGPT to write my lectures for me, especially given the quirks and issues that have been raised around using such systems to generate large amounts of text.

Thanks for the question!

Andy

oscar_w2 karma

Have you ever had a lucid dream? A dream in which you were fully aware that your physical body was asleep in bed whilst the you you've been familiar with your entire life is now aware within a dreamscape?

AndyBarnesAI1 karma

I can't say I have! I've certainly had some strange dreams after eating too much cheese the night before but never a lucid dream.

sophisticatedff2 karma

Hi Andy! To what extent do you implement AI in your professional work and/or personal life?

AndyBarnesAI4 karma

Hey! Thanks for the question.

Professionally I use AI a lot in my research; for example, I do a fair bit of extreme weather forecasting using AI and I have a number of ongoing projects developing AI-based software systems. In my personal life, however, I try to steer clear of AI and like to leave the tech at work! It's nice having the balance and being able to escape the world of technology and computers after work. I read, hike, play board games and role-playing games, draw and garden: essentially anything to keep me away from a screen!

Andy

Fantastic_Return82291 karma

Hi Andy. What, according to you, are underrated machine learning areas to work on?

AndyBarnesAI4 karma

Hi there!

In my experience people are desperate to apply machine learning and AI to their problems but just don't know how. So, to answer your question, I believe the underrated area is the application of ML/AI to other disciplines: this could be civil engineering and the optimisation of bridges, chemistry and the classification of certain drugs, medicine and the prevention of cancers, or even physics and the identification of dangerous comets. The applications are endless and we need more people to enter these fields as multi-disciplinary experts to help close this gap.

Hope this helps!

Andy

Yddalv1 karma

Hi 👋 Andy. Tell me one reason not to think that AI will eventually replace the entire human workforce? We thought that there would be a renaissance in art and culture with “stupid” jobs removed, but it seems that AI is doing that job fairly well (writing, drawing etc).

AndyBarnesAI9 karma

Hi and thanks for the interesting question!

The one thing I do not foresee AI ever achieving is emotional intelligence, something we humans are experts at. This is the reason we've had automatic coffee machines for decades and yet we still prefer barista coffee (or at least I do). Everything from the minor interaction when they take my order to the imperfections in the latte is the reason AI hasn't replaced them, and that 'human touch' is something I do not see AI ever replacing.

I hope this is reason enough to convince you otherwise!

Andy

DrozdMensch-2 karma

When will AI replace all the people and eliminate them?

AndyBarnesAI3 karma

Hi there!

I do not believe AI will ever be capable of fully replacing humans and it's certainly unlikely any future intelligence will eliminate us all! I like to think we'd be smarter than that :)

Thanks,

Andy