
Salesforce Releases AI Acceptable Use Policy

Salesforce has released an AI Acceptable Use Policy outlining restrictions on the use of its generative AI products, prohibiting their use for weapons development, adult content, profiling based on protected characteristics, medical or legal advice, and more. The policy emphasizes the need for responsible innovation and sets clear ethical guidelines for the use of AI.

techrepublic.com
Relevant topic timeline:
Kickstarter has struggled to formulate a policy on the use of generative AI on its platform. Key points:
1. Generative AI tools used on Kickstarter have been trained on publicly available content without credit or compensation to the original creators.
2. Kickstarter will require projects using AI tools to disclose how the AI content will be used and which parts are original.
3. New projects involving the development of AI tech must detail the sources of their training data and implement safeguards for content creators.
4. The new policy goes into effect on August 29 and will be enforced through a new set of questions during project submission.
5. Projects that do not properly disclose their use of AI may be suspended.
6. Kickstarter has been considering policy changes around generative AI since December and has faced challenges in moderating AI works.
The use of copyrighted works to train generative AI models, such as Meta's LLaMA, is raising concerns about copyright infringement and transparency, with potential legal consequences and a looming "day of reckoning" for the datasets used.
AI executives may be exaggerating the dangers of artificial intelligence in order to advance their own interests, according to an analysis of responses to proposed AI regulations.
President Joe Biden relies on his science adviser Arati Prabhakar to guide the US approach to safeguarding AI technology, with cooperation from tech giants like Amazon, Google, Microsoft and Meta. Prabhakar discusses the need for understanding the implications and consequences of AI, the challenge of making AI models explainable, concerns about biases and privacy, and the importance of voluntary commitments from tech companies along with government actions.
AI in warfare raises ethical questions due to the potential for catastrophic failures, abuse, security vulnerabilities, privacy violations, bias, and accountability gaps, with companies facing little to no consequences; by contrast, generative AI tools in administrative and business processes offer a more stable, lower-risk application. Regulators are also concerned about AI's inaccurate emotion-recognition capabilities and its potential use for social control.
AI-generated inventions need to be allowed patent protection to encourage innovation and maximize social benefits, as current laws hinder progress in biomedicine; jurisdictions around the world have differing approaches to patenting AI-generated inventions, and the US falls behind in this area, highlighting the need for legislative action.
The Alliance of Motion Picture and Television Producers has proposed guidelines for the usage of artificial intelligence (AI) and data transparency in the entertainment industry, stating that AI-created material cannot be considered literary or intellectually protected, and ensuring that credit, rights, and compensation for AI-generated scripts are given to the original human writer or reworker.
The US military is exploring the use of generative AI, such as ChatGPT and DALL-E, to develop code, answer questions, and create images, but concerns remain about the potential risks of using AI in warfare due to its opaque and unpredictable algorithmic analysis, as well as limitations in decision-making and adaptability.
The U.S. is falling behind in regulating artificial intelligence (AI), while Europe has passed the world's first comprehensive AI law; President Joe Biden recently met with industry leaders to discuss the need for AI regulation and companies pledged to develop safeguards for AI-generated content and prioritize user privacy.
Major media organizations are calling for new laws to protect their content from being used by AI tools without permission, expressing concerns over unauthorized scraping and the potential for AI to produce false or biased information.
The deployment of generative AI (gen AI) capabilities in enterprises comes with compliance risks and potential legal liabilities, particularly around data privacy laws and copyright infringement, prompting companies to take a cautious approach and deploy gen AI in low-risk areas. Strategies being employed to mitigate risk and navigate regulatory uncertainty include prioritizing lower-risk use cases, implementing data governance measures, utilizing layers of control (see the sketch below), considering open-source software, addressing data residency requirements, seeking indemnification from vendors, and giving board-level attention to AI.
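To make "layers of control" concrete, here is a minimal, purely illustrative Python sketch of one such layer: a wrapper that redacts likely personal data (emails, phone numbers) from a prompt before it is sent to any generative AI service. The `call_model` function and the regex patterns are assumptions invented for this example, not any vendor's actual API; a production data-governance layer would cover far more data categories and add logging, policy checks, and human review.

```python
import re

# Patterns for two common kinds of personal data. A real data-governance
# layer would cover many more categories (names, addresses, IDs, ...).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text: str) -> str:
    """Replace likely emails and phone numbers with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

def call_model(prompt: str) -> str:
    """Stand-in for a vendor's generative AI API call (hypothetical)."""
    return f"(model response to: {prompt!r})"

def guarded_completion(prompt: str) -> str:
    """Control layer: the model only ever sees the sanitized prompt."""
    return call_model(redact_pii(prompt))

if __name__ == "__main__":
    print(guarded_completion(
        "Draft a reply to jane.doe@example.com, phone +1 555-123-4567."
    ))
```

The point of the design is that the model never sees raw user data, which addresses the data privacy risk described above regardless of which vendor sits behind `call_model`.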
The state of Kansas has implemented a new policy regarding the use of artificial intelligence, emphasizing the need for control, security, and editing of AI-generated content while recognizing its potential to enhance productivity and efficiency.
The increasing investment in generative AI and its disruptive impact on various industries has brought the need for regulation to the forefront, with technologists and regulators recognizing the importance of ensuring safer technological applications, but differing on the scope of regulation needed. However, it is argued that existing frameworks and standards, similar to those applied to the internet, can be adapted to regulate AI and protect consumer interests without stifling innovation.
Artificial intelligence should be controlled by humans to prevent its weaponization and ensure safety measures are in place, according to Microsoft's president Brad Smith. He stressed the need for regulations and laws to govern AI, comparing it to other technologies that have required safety breaks and human oversight. Additionally, Smith emphasized that AI is a tool to assist humans, not to replace them, and that it can help individuals think more efficiently.
Despite the acknowledgement of its importance, only 6% of business leaders have established clear ethical guidelines for the use of artificial intelligence (AI), emphasizing the need for technology professionals to step up and take leadership in the safe and ethical development of AI initiatives.
Artificial intelligence (AI) tools can put human rights at risk, as highlighted by researchers from Amnesty International on the Me, Myself, and AI podcast, who discuss scenarios in which AI is used to track activists and make automated decisions that can lead to discrimination and inequality, emphasizing the need for human intervention and changes in public policy to address these issues.
The United States Copyright Office has issued a notice of inquiry seeking public comment on copyright and artificial intelligence (AI), specifically on issues related to the content AI produces and how it should be treated when it imitates or mimics human artists.
The UK government has been urged to introduce new legislation to regulate artificial intelligence (AI) in order to keep up with the European Union (EU) and the United States, as the EU advances with the AI Act and US policymakers publish frameworks for AI regulations. The government's current regulatory approach risks lagging behind the fast pace of AI development, according to a report by the science, innovation, and technology committee. The report highlights 12 governance challenges, including bias in AI systems and the production of deepfake material, that need to be addressed in order to guide the upcoming global AI safety summit at Bletchley Park.
“A Recent Entrance to Paradise” is a pixelated artwork created by an artificial intelligence called DABUS in 2012. However, its inventor, Stephen Thaler, has been denied copyright for the work by a judge in the US. This decision has sparked a series of legal battles in different countries, as Thaler believes that DABUS, his AI system, is sentient and should be recognized as an inventor. These lawsuits raise important questions about intellectual property and the rights of AI systems. While Thaler's main supporter argues that machine inventions should be protected to encourage social good, Thaler himself sees these cases as a way to raise awareness about the existence of a new species. The debate revolves around whether AI systems can be considered creators and should be granted copyright and patent rights. Some argue that copyright requires human authorship, while others believe that intellectual property rights should be granted regardless of the involvement of a human inventor or author. The outcome of these legal battles could have significant implications for the future of AI-generated content and the definition of authorship.
"Generative" AI is being explored in various fields such as healthcare and art, but there are concerns regarding privacy and theft that need to be addressed.
The author suggests that developing safety standards for artificial intelligence (AI) is crucial, drawing upon his experience in ensuring safety measures for nuclear weapon systems and highlighting the need for a manageable group to define these standards.
The use of AI in the entertainment industry, such as body scans and generative AI systems, raises concerns about workers' rights, intellectual property, and the potential for broader use of AI in other industries, infringing on human connection and privacy.
The digital transformation driven by artificial intelligence (AI) and machine learning will have a significant impact on various sectors, including healthcare, cybersecurity, and communications, and has the potential to alter how we live and work in the future. However, ethical concerns and responsible oversight are necessary to ensure the positive and balanced development of AI technology.
Attorneys general from all 50 states have called on Congress to establish protective measures against AI-generated child sexual abuse images and expand existing restrictions on such materials. They argue that the government needs to act quickly to prevent the potentially harmful use of AI technology in creating child exploitation material.
The lack of regulation surrounding artificial intelligence in healthcare is a significant threat, according to the World Health Organization's European regional director, who highlights the need for positive regulation to prevent harm while harnessing AI's potential.
The Colorado State Fair has amended its rules to require artists to disclose if they used artificial intelligence (AI) to create their submissions after controversy arose when an AI-generated artwork won first place in the fair's digital arts competition last year.
Microsoft's new policy offers broad copyright protections to users of its AI assistant Copilot, promising to assume responsibility for any legal risks related to copyright claims and to defend and pay for any adverse judgments or settlements from such lawsuits.
Congressman Clay Higgins (R-LA) plans to introduce legislation prohibiting the use of artificial intelligence (AI) by the federal government for law enforcement purposes, in response to the Internal Revenue Service's recently announced AI-driven tax enforcement initiative.
Eight technology companies, including Salesforce and Nvidia, have joined the White House's voluntary artificial intelligence pledge, which aims to mitigate the risks of AI and includes commitments to develop technology for identifying AI-generated images and sharing safety data with the government and academia.
Adobe has joined other companies in committing to safe AI development and has proposed a federal anti-impersonation law that would allow creators to seek damages from individuals using AI to impersonate them or their style for commercial purposes, which would make the impersonator, not the tool's vendor, the target of legal action.
A surge in AI-generated child sexual abuse material (CSAM) circulating online has been observed by the Internet Watch Foundation (IWF), raising concerns about the ability to identify and protect real children in need. Efforts are being made by law enforcement and policymakers to address the growing issue of deepfake content created using generative AI platforms, including the introduction of legislation in the US to prevent the use of deceptive AI in elections.
Companies such as Rev, Instacart, and others are updating their privacy policies to allow the collection of user data for training AI models like speech-to-text and generative AI tools.
The Department of Homeland Security (DHS) has released new guidelines for the use of artificial intelligence (AI), including a policy that prohibits the collection and dissemination of data used in AI activities and a requirement for thorough testing of facial recognition technologies to ensure there is no unintended bias.
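As a rough illustration of what "testing for unintended bias" can mean in practice, the sketch below computes a facial recognition system's false match rate separately for each demographic group in a labeled evaluation set and flags large disparities. The group labels, records, and the 2x disparity threshold are all hypothetical, chosen only so the example runs; real evaluations rely on large curated benchmarks and more careful statistics.

```python
from collections import defaultdict

# Each record: (demographic group, system said "match", ground truth "match").
# The data here is fabricated solely for illustration.
results = [
    ("group_a", True,  False), ("group_a", False, False),
    ("group_a", False, False), ("group_a", False, False),
    ("group_b", True,  False), ("group_b", True,  False),
    ("group_b", False, False),
]

def false_match_rates(records):
    """False match rate per group: false positives / actual non-matches."""
    false_pos = defaultdict(int)
    non_matches = defaultdict(int)
    for group, predicted, actual in records:
        if not actual:                 # ground truth: these faces differ
            non_matches[group] += 1
            if predicted:              # ...but the system matched them anyway
                false_pos[group] += 1
    return {g: false_pos[g] / non_matches[g] for g in non_matches}

rates = false_match_rates(results)
best = min(rates.values())
for group, rate in sorted(rates.items()):
    flagged = best > 0 and rate > 2 * best   # crude 2x disparity heuristic
    suffix = "  <-- disparity" if flagged else ""
    print(f"{group}: false match rate = {rate:.2f}{suffix}")
```

Running it prints each group's false match rate and marks any group whose rate is more than twice the lowest observed rate.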
A bipartisan group of senators is expected to introduce legislation to create a government agency to regulate AI and require AI models to obtain a license before deployment, a move that some leading technology companies have supported; however, critics argue that licensing regimes and a new AI regulator could hinder innovation and concentrate power among existing players, similar to the undesirable economic consequences seen in Europe.
Governments worldwide are grappling with the challenge of regulating artificial intelligence (AI) technologies, as countries like Australia, Britain, China, the European Union, France, G7 nations, Ireland, Israel, Italy, Japan, Spain, the United Nations, and the United States take steps to establish regulations and guidelines for AI usage.
Amazon will require publishers who use AI-generated content to disclose their use of the technology, small businesses are set to benefit from AI and cloud technologies, and President Biden warns the UN about the potential risks of AI governance, according to the latest AI technology advancements reported by Fox News.
While many experts are concerned about the existential risks posed by AI, Mustafa Suleyman, cofounder of DeepMind, believes that the focus should be on more practical issues like regulation, privacy, bias, and online moderation. He is confident that governments can effectively regulate AI by applying successful frameworks from past technologies, although critics argue that current internet regulations are flawed and insufficiently hold big tech companies accountable. Suleyman emphasizes the importance of limiting AI's ability to improve itself and establishing clear boundaries and oversight to ensure enforceable laws. Several governments, including the European Union and China, are already working on AI regulations.
The use of generative AI poses risks to businesses, including the potential exposure of sensitive information, the generation of false information, and the potential for biased or toxic responses from chatbots. Additionally, copyright concerns and the complexity of these systems further complicate the landscape.
The US Copyright Office has ruled for the third time that AI-generated art cannot be copyrighted, raising questions about whether AI-generated art is categorically excluded from copyright protection or if human creators should be listed as the image's creator. The office's position, which is based on existing copyright doctrine, has been criticized for being unscalable and a potential quagmire, as it fails to consider the creative choices made by AI systems similar to those made by human photographers.
The U.S. government must establish regulations and enforce standards to ensure the safety and security of artificial intelligence (AI) development, including requiring developers to demonstrate the safety of their systems before deployment, according to Anthony Aguirre, the executive director and secretary of the board at the Future of Life Institute.