- Redefining accountability: Why corporations, not tools, must answer for AI misuse.
- Creating future ethics from past failures and demanding corporate accountability in AI development.
The rapid advancement of AI has sparked both excitement and concern among the public. While there are valid reasons to be cautious about the potential misuse of AI, it is important to recognize that the risks are rooted much more in how AI is utilized than in the technology itself.
In particular, the actions of large IT corporations and giants in the industry can pose significant challenges if not properly regulated and governed.
We must address these concerns through the implementation of robust regulatory frameworks and industry ethics standards that ensure the ethical usage of data, transparent development for public accountability, and other practical issues of AI governance.
For example, stock photo provider Getty Images sued AI company Stability AI Inc. for allegedly copying more than 12 million Getty photos to train its AI image-generation system.
In addition, OpenAI and Microsoft have been hit with author copyright lawsuits over AI training data. These are among several suits brought by copyright holders, including authors John Grisham, George R.R. Martin, and Jonathan Franzen, against tech firms such as OpenAI.
They follow a separate Getty case against Stability in the UK and a class-action complaint filed by artists in California against Stability and other generative AI companies.
Need to address practical issues
First and foremost, it is crucial to dispel the notion that AI will imminently lead to an existential threat or dystopian future. Claims of a runaway AI apocalypse are largely exaggerated and do not accurately reflect the current state of the technology. Instead, our focus should be on addressing the practical issues at hand.
One of the main concerns stems from the use of AI by large IT corporations and giants. There have been numerous instances where these companies have engaged in questionable practices that raise ethical and societal concerns.
It is important to acknowledge these instances and work towards preventing similar behaviour in the future. The key argument is that the policies and behaviour of these corporations are separate from the AI tools themselves, and it is the corporations' policies and behaviour that must be carefully assessed.
Corporate behaviour and priorities
To mitigate these risks, we need comprehensive regulatory frameworks specifically tailored to how corporations develop and deploy AI.
These frameworks should promote transparency, accountability, and responsible use of AI technologies, in the context of corporate behaviour and priorities.
By establishing clear guidelines and standards, we can ensure that AI is developed and deployed by these big actors in a manner that aligns with societal values and expectations.
In other words, the pathway of AI development must be steered correctly by the corporations themselves; AI is not a separate entity acting on its own.
Because of this, industry ethics play a crucial role in ensuring the responsible use of AI. Companies must adopt ethical guidelines that prioritize the well-being of users and society as a whole.
Minimising negative impacts
From data privacy to algorithmic bias, ethical considerations should be embedded in every stage of corporate use of AI as well as its development. This will help avoid unintended consequences and address potential challenges associated with AI deployment.
Achieving proper regulation and industry ethics in the AI field is not without its challenges.
However, it is crucial that we address these issues in a proactive and thoughtful manner. By doing so, we can harness the full potential of AI while minimising any negative impacts.
It is worth noting that the challenges we face are eminently solvable. With a concerted effort from policymakers, industry leaders, and other stakeholders, we can establish a regulatory landscape and industry standards that facilitate responsible AI use and development.
This will not only ensure the ethical use of AI technologies but also promote economic growth and global competitiveness.
The risks associated with AI should not discourage us from its development; rather, they should motivate us to create an environment where AI can thrive while being in harmony with our values and principles.
By fostering an ecosystem that encourages innovation, transparency, and responsible use of AI, we can reap the benefits of this transformative technology while successfully addressing any concerns.
The concerns surrounding AI development and the possibility of misuse of AI are not insurmountable.
By focusing on proper corporate regulation and industry ethics, we can harness the true potential of AI technology for the betterment of society.
- Dmitry Kaminskiy, General Partner at Deep Knowledge Group, a consortium of commercial and non-profit organisations active on multiple fronts in the realm of DeepTech and Frontier Technologies.