At a time of deep uncertainty over whether artificial intelligence (AI) will cross the lines that separate machine capabilities from human ones, Intel has taken on the responsibility of steering the technology towards an exciting future, investing heavily in studying AI's abilities and aligning them with human needs, so that the prevailing feeling around this unprecedented innovation is one of excitement and anticipation.
Addressing the multinational technology company's responsibility towards AI, Lama Nachman, director of Intel's Intelligent Systems Research Lab, spoke at a roundtable with journalists about the methods and strategies adopted by the Intel teams focused on driving AI innovation while addressing human concerns.
In her presentation, Nachman stated that for Intel, prioritising AI responsibility came naturally after observing public concerns about privacy leaks, toxicity, and the use of AI to violate human rights. These concerns led the company to establish a responsible AI advisory council, charged with assessing the ethical risks of building and using AI technologies in different fields and devising appropriate risk-mitigation strategies.
“We are simply looking at every possible way that AI can go wrong, and trying to be ahead of it,” she said.
The responsible AI framework rests on four main pillars: internal and external governance, research and collaboration, products and solutions, and inclusive AI.
“This council tries to explore everything we should think about when developing AI, from what AI should be used for, to how to develop it responsibly, to methods for stopping its misuse, all of which falls under governing the product internally and externally,” Nachman said. “Beyond governance, across Intel we are looking through the lens of research and all possible collaborations to lower the barriers to responsible AI development. And finally, the council is engaged with internal and external programmes to ensure that AI can be inclusive.”
Intel is highly focused on developing principles, ethics training, policies, and standards to ensure a bright future for AI. “Sometimes when you build something with the best of intentions, you face the possibility of misuse,” she explained. The main principles assessed and tested include personal privacy, security and safety, human rights, and equality and inclusion for people of colour, she added.
The lab is also responsible for researching methods of enhancing privacy and security, the public's main concerns.
The Intel team is currently working to identify better methods of governing the internal and external use of AI, and to ensure that its teams are diverse enough to view problems through every possible lens, which ultimately helps address the range of concerns about AI.
In its commitment to limiting the misuse of AI as far as possible, Intel has rejected several collaborations that might have jeopardised its red-line principles, which in turn has pushed the team to select projects carefully so that none risks violating those principles. The lab team prefers to be engaged from the early drafting stages of any project to make sure it abides by Intel's fundamental principles. Taking part in the initial planning, Nachman explained, “can mitigate risks and redefine projects rather than simply cancel them,” though that would not stop the team from withdrawing from any project deemed too risky.
“Development is getting faster and faster, and governing it is getting harder, as new technologies are always emerging on which the teams in charge need feedback. But from what I have noticed, engaging the project groups throughout the inquiry phase produces more evidence of a responsible AI mindset in the design of the solution from the very beginning. Designers have become more aware and are thinking about these issues in the context of their own projects, so they tend to resolve them even before they come to us. Developers are usually looking to improve their solutions and reduce ethical risk, and are typically receptive to our feedback,” Nachman explained.
On another front, democratising AI tops the list of Intel's approaches to public use of the technology. Intel believes AI is only a few steps away from making drastic changes to people's everyday lives, and that granting the public equal access to it is part of the democratisation plan the company stands behind.
According to Nachman, Intel is already seeing the workplace change dramatically because of AI. “So, looking into the future, if you want to have equity in society, you will have to democratise access to the knowledge of AI for the public,” she added. However, opening up information sources in this way carries a concern that is still being weighed: it might increase the risk of projects being duplicated.