
Microsoft’s 5-point Plan to Rein in AI as It Integrates into Modern Society

Microsoft Building
Image Source: https://pixabay.com/photos/building-cologne-facade-1011876/

In April this year, Elon Musk and other big names in tech signed a petition calling for a six-month pause on AI development to give room for the creation of safeguards. Many lauded the move, even though it was later revealed that Musk pressed ahead with his own AI development at Twitter despite his call for a pause.

Nevertheless, it is clear that tech companies understand the risks of AI. One of the loudest voices in the push for AI safety and regulation is Microsoft. The company, which builds primarily on the product of OpenAI’s years of AI R&D, understands the need to install guardrails around artificial intelligence.

Microsoft’s five-point plan

Microsoft, as one of the top backers of OpenAI, is pushing for AI regulation. Microsoft President Brad Smith calls this the challenge of the 21st century. The company is calling for government involvement and action to ensure that AI works to the benefit of humanity rather than becoming the inimical force many fear it could be.

Futuristic Robot
Image Source: https://unsplash.com/photos/at0FNdX_0f8

For this, Microsoft laid out a five-point plan for AI regulation wherein the government plays a crucial role. The plan can be summed up as follows:

1. Formulation and implementation of government-driven safety frameworks for artificial intelligence.

In an hour-long speech in Washington, D.C., Microsoft President Brad Smith presented an AI regulation plan that entails government-led safety rules and regulations applying to all aspects of AI. This AI safety framework would include input from industry players and the public, but Microsoft argues it is up to the government to facilitate the crafting of rules and ensure that policies are comprehensive. Smith echoed the call for an AI regulatory body made by OpenAI CEO Sam Altman, who had also testified before Congress.

2. Implementing safety brakes for AI systems that control critical infrastructure

Critical infrastructure has been the target of numerous cyber attacks in different parts of the world. The use of AI in telecommunications, transportation, utilities, and other crucial assets is set to boost efficiency, but it also raises the likelihood of attacks. Threat actors can launch adversarial machine learning attacks to disrupt the functioning of AI-powered automation. That is why it is vital to have safety brakes that can isolate an attack and ensure effective mitigation and remediation.
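
To make the idea of a safety brake a little more concrete, here is a minimal, hypothetical Python sketch: an AI-recommended setpoint for a piece of equipment is checked against a fixed safe operating range, and the system falls back to a conservative default whenever the recommendation falls outside it. The names, limits, and overall design are illustrative assumptions for this article, not a description of Microsoft’s proposal or of any real control system.

# Hypothetical "safety brake" sketch: an AI controller's output is checked
# against hard-coded safe limits before it reaches the physical system, and
# control reverts to a conservative default if the check fails.
# All names and values below are illustrative assumptions.

SAFE_PUMP_SPEED_RANGE = (0.0, 0.8)   # assumed safe operating band (fraction of max speed)
FALLBACK_PUMP_SPEED = 0.3            # conservative default used when an AI output is rejected

def log_incident(value: float) -> None:
    # Placeholder for alerting operators and recording the event for remediation.
    print(f"Safety brake engaged: rejected setpoint {value:.2f}")

def apply_safety_brake(ai_recommended_speed: float) -> float:
    """Check an AI-recommended setpoint against the safe envelope before actuation."""
    low, high = SAFE_PUMP_SPEED_RANGE
    if low <= ai_recommended_speed <= high:
        return ai_recommended_speed  # AI output is within the safe envelope; pass it through
    # An out-of-range output could be a model error or an adversarial manipulation;
    # isolate it by ignoring the recommendation and reverting to the fallback value.
    log_incident(ai_recommended_speed)
    return FALLBACK_PUMP_SPEED

if __name__ == "__main__":
    print(apply_safety_brake(0.5))   # within range: passed through unchanged
    print(apply_safety_brake(1.4))   # out of range: brake engages, fallback returned

The point of the sketch is simply that the brake sits outside the AI model itself, so even a compromised or misbehaving model cannot push the physical system beyond limits that humans have set in advance.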

3. Development of a legal and regulatory framework based on AI’s technology architecture

Existing laws and government policies have not taken the impact of AI into account, and it is clear that policymakers have yet to grasp the enormity of that impact. As such, it is important to carefully and meticulously study the consequences of AI’s integration into modern society. As Brad Smith asserted, the rule of law should govern AI in every part of its lifecycle, including its supply chain. “The rule of law and a commitment to democracy has kept technology in its proper place…We’ve done it before; we can do it again,” Smith said.

4. Ensuring transparency and access to AI for academic and nonprofit institutions

AI development requires advanced technologies and enormous funding. This creates the possibility of big companies monopolizing, or at least “oligopolizing,” it. That is why Microsoft seeks to ensure that academic and nonprofit institutions have access to the technology as well as opportunities to scrutinize it and provide input. Allowing profit-driven organizations to dominate AI can have adverse consequences, especially in the areas of employment, competition, and business ethics.

5. Fostering public-private collaboration on using AI as a tool in resolving societal issues and challenges related to the adoption of AI and other advanced technologies

Lastly, it is essential that AI becomes a force for good. Its applications should result in the improvement of lives, not in more risks and challenges. Like most other technologies, artificial intelligence is regarded as a double-edged sword: it can benefit humanity, but it can also be harnessed by bad actors. Partnership between the public and private sectors is crucial in making sure that AI becomes an asset, not a tool for attacking people or for big businesses to exploit at society’s expense.

Robotics AI
Image Source: https://unsplash.com/photos/lUSFeh77gcs

Artificial intelligence appears inevitable in the modern world, and Big Tech companies understand the risks that come with it. Many in government and academia have already started sounding the alarm and pushing for sensible regulation. Ultimately, everyone wants to make sure that AI remains under human control. AI has now reached a level where it could take over human jobs or power more sophisticated and aggressive cyber attacks, not just defeat CAPTCHAs. It is definitely a welcome development that one of the biggest names identified with AI is pushing for policies that make it clear artificial intelligence should be directed at applications that better human life.