ACS Puts Spotlight On AI Challenges


A panel hosted by the Australian Computer Society (ACS) discussed what AI is and how it will impact the IT industry and society. The panellists were Liz Bacon (a past President of the British Computer Society), Marita Cheng (Founder/CEO of Aubot and a winner of the Young Australian of the Year award), Mike Hinchey (from the International Federation for Information Processing) and Anthony Wong (current President of the ACS).

Hinchey started the discussion by noting one of the challenges we have in understanding AI is that we don’t know what intelligence is. He asked whether we want to build intelligence or just mimic it and noted that copying nature might not be the best way to build AI systems.

While we can learn lessons from nature, such as how thrust and other forces work, copying it is not always optimal; otherwise we would design planes with flapping wings.

Part of the problem with AI is that many people have conflated the use of algorithms with the idea of AI. Hinchey was forthright in his view that using an algorithm is not AI.

Wong put forward a helpful model of what he considered AI to be. In the maturity model he presented to the room, the most basic form of AI is process robotics, where systems mimic human action. The next level up is intelligent automation, where systems mimic or augment human judgement, followed by cognitive automation, where systems augment human intelligence. Artificial general intelligence (AGI), systems that can mimic human intelligence, is considered the pinnacle.

Wong said we are definitely not at the artificial general intelligence level. Hinchey agreed, saying we will probably never achieve AGI, at least not in our lifetimes.

The discussion then moved towards the inevitable topic of disruption. Cheng has been working on an image recognition project. Using a repository of over two billion images, she has developed a system that can recognise over 2000 different types of objects with greater accuracy than a human.

That ability to usurp humans from some roles is seen in a number of markets. Wong said that around 2000, Wall Street employed 150,000 finance workers. By 2010, about 50,000 of those jobs were gone. Goldman Sachs once had 600 equity traders; now it has just two, supported by 200 computer engineers.

About a third of Goldman Sachs workers are computer engineers. Wong surmised that one software engineer can replace four traders.

One of the challenges in discussions around AI is that the conversation inevitably turns to machine learning. Cheng noted that there are a number of strong use-cases for machine learning, including autonomous vehicles, virtual assistants and chatbots.

Business model disruption from AI and machine learning comes from automating repetitive tasks. For example, during the Industrial Revolution, steam engines were able to replace people in the arduous task of extracting water from mine shafts.

While discussions about automation typically focus on repetitive tasks, Cheng said the focus might be better placed on repetitive outcomes.

For example, cleaning a hotel room requires many tasks that are difficult to automate, such as manoeuvring a vacuum cleaner into a corner. But the outcome, a clean room, is the repetitive element. By focusing systems on outcomes rather than automating the actions of humans, we can potentially automate tasks that seem too complex.

The panel discussion ended with a look at some of the social and ethical considerations.

While automation does lead to job losses in some sectors, we see growth in others. The Wall Street example is a case in point, although all the panellists agreed that governments need to give some consideration to ensuring workers pushed out of long-term roles are suitably supported as market needs for emerging skills evolve.

Hinchey noted that estimates of job losses are usually overstated. Wong agreed, saying it's not necessarily about replacing jobs but about changing tasks.

Bacon rounded out the discussion with some ethical considerations. What if an autonomous vehicle is programmed with an algorithm that would, in a crash situation, “sacrifice” the life of a pedestrian over that of the vehicle’s owner? Could the car’s designer, or the developer of the algorithm, be charged with a crime?

What if owners are given choices in the safety algorithms they can have their cars equipped with?

As one audience member said, there is not a country in the world where the law is keeping up with technological change.
