Anthropic AI controls computers

Would you be willing to give control of your computer to an AI? This is the proposal from Anthropic, which announced the launch of Claude 3.5 Sonnet, an artificial intelligence model that pushes the boundaries of the classic virtual assistant. Claude no longer just answers questions or performs tasks in a chat box: he can interact directly with programs installed on the computer, simulating mouse clicks, keystrokes, and other actions normally performed by a human user.

Artificial intelligence controls your computer

We are entering a new era where AI can use all the tools that humans use to accomplish their tasks “, explains Jared Kaplan, Scientific Director of Anthropic. This innovation is a step towards creating “intelligent agents”, able to act autonomously in software and in many uses.

Cloud can therefore be used for all kinds of tasks, from programming to organizing trips. For example, in one demo, the AI ​​was tasked with planning a tour of the Golden Gate Bridge at sunrise. After opening the browser, Claude searched for the necessary information and added the event to the calendar. However, the AI ​​omitted important details such as directions to get there. Ooops!

While the capabilities of the Claude 3.5 Sonnet are impressive, they are not without risks. User security is a key concern, because allowing AI access to all programs and files on a computer could open the door to misuse or unexpected errors. Anthropic is aware of this risk and said it is working to take preventive measures. ” We believe it is better to give computers access to more limited and relatively safer AI models“, emphasized the company stressing the importance of monitoring potential problems now.

See also  The Internet Computer (PKI) comes out of nowhere to storm the top ten - the latest news

Instant injection attacks are among the threats identified. This type of cyberattack consists of inserting malicious instructions into the command flow intended for the artificial intelligence, causing it to perform actions that are not intended by the user. Even if Claude was not connected to the Internet during his training, his ability to interpret screenshots in real time makes him vulnerable to this type of attack.

In addition to unintended risks, Claude's malicious use is also a concern. As the US election approaches, Anthropic has put systems in place to prevent AI from engaging in malicious activities such as creating content for social media or interacting with government websites.

🟣 To not miss any news on Journal du Geek, subscribe to Google News. And if you like us, we have a newsletter every morning.

Frank Mccarthy

<p class="sign">"Certified gamer. Problem solver. Internet enthusiast. Twitter scholar. Infuriatingly humble alcohol geek. Tv guru."</p>

Leave a Reply

Your email address will not be published. Required fields are marked *

Back to top