A new poll conducted by the AI Policy Institute reveals that 72 percent of American voters want to slow down the development of AI, signaling a divergence between elite and public opinion on the technology. The poll also shows that 82 percent of American voters do not trust AI companies to self-regulate. To address these concerns, the AI Now Institute has proposed a framework called "Zero Trust AI Governance," which calls for lawmakers to vigorously enforce existing laws, establish bold and easily administrable rules, and place the burden of proof on companies to demonstrate the safety of their AI systems.
The U.S. Securities and Exchange Commission (SEC) has adopted new rules aimed at increasing transparency and accountability in the private equity and hedge fund industry, requiring quarterly fee and performance reports, disclosure of fee structures, and annual audits, while banning preferential treatment for certain investors.
The Securities and Exchange Commission (SEC) may have suffered setbacks in its regulation-by-enforcement approach to the cryptocurrency industry. The latest ruling in favor of Grayscale Investments could pave the way for a bitcoin spot exchange-traded fund (ETF), though the SEC could appeal the decision or find new grounds to deny similar applications, and the lack of a regulated exchange for the bitcoin spot market remains a challenge. Despite these court challenges, SEC Chair Gary Gensler is expected to continue his enforcement tactics, while Congress and a potential Republican president in 2024 may play a role in shaping the regulatory environment for digital assets.
Eight more companies, including Adobe, IBM, Palantir, Nvidia, and Salesforce, have pledged to voluntarily follow safety, security, and trust standards for artificial intelligence (AI) technology, joining the initiative led by Amazon, Google, Microsoft, and others, as concerns about the impact of AI continue to grow.
California Senator Scott Wiener is introducing a bill to regulate artificial intelligence (AI) in the state, aiming to establish transparency requirements, legal liability, and security measures for advanced AI systems. The bill also proposes setting up a state research cloud called "CalCompute" to support AI development outside of big industry.
The Biden-Harris Administration has secured commitments from eight leading AI companies, including Adobe, IBM, and Salesforce, to advance the development of safe, secure, and trustworthy AI and bridge the gap to government action, emphasizing principles of safety, security, and trust.
The US Securities and Exchange Commission (SEC) is using AI technology for market surveillance and enforcement actions to identify patterns of misconduct, and has requested more funding to expand its technological capabilities.
Recent Capitol Hill activity, including proposed legislation and AI hearings, provides corporate leaders with greater clarity on the federal regulation of artificial intelligence, offering insight into potential licensing requirements, oversight, accountability, transparency, and consumer protections.
The AI industry should learn from the regulatory challenges faced by the crypto industry and take a proactive approach: building relationships with lawmakers, highlighting the benefits of AI technology, and winning public support through campaigns in key congressional districts and states.
The Subcommittee on Cybersecurity, Information Technology, and Government Innovation discussed the federal government's use of artificial intelligence (AI) and emphasized the need for responsible governance, oversight, and accountability to mitigate risks and protect civil liberties and privacy rights.