Google DeepMind trained a robot to beat humans at table tennis
A new table tennis bot developed by Google DeepMind has beaten all of its beginner-level human opponents and 55% of those playing at an amateur level. Although it lost to advanced players, it's an impressive advance. The techniques behind the bot could carry over to real-world robotics, such as performing useful tasks in homes and warehouses. Researchers used a two-part approach to train the system to mimic human skills like hand-eye coordination and quick decision-making.
Read more
@Ai_Events
.
AI “godfather” Yoshua Bengio has joined a UK project to prevent AI catastrophes
Researchers at Safeguarded AI aim to build AI systems that offer quantitative guarantees about their impact on the world. They're using mathematical analysis to supplement human testing, ensuring AI systems operate as intended. The team hopes to create a 'gatekeeper' AI that reduces safety risks in high-stakes sectors like transport and energy. Their argument: without AI safeguarding AI, complex systems will become too complicated to analyze manually.
Read more
@Ai_Events
.
Overcoming Obstacles to Enterprise-Wide AI Deployment
Only 5.4% of US businesses used AI to produce a product or service in 2024. Scaling AI requires strategic transitions in infrastructure, data governance, and supplier ecosystems. AI readiness spending is set to rise, with 9 in 10 companies increasing their AI budgets. Data liquidity and quality are crucial for deployment, with 50% of companies citing data quality as the most limiting factor. Companies also say they are willing to pause AI adoption if doing so ensures safety and security.
Read more
@Ai_Events
.
MIT Researchers Find Potential in Using Large Language Models for Anomaly Detection
MIT researchers are exploring the potential of using large language models (LLMs) for anomaly detection in time-series data. The approach, called SigLLM, converts time-series data into text-based inputs that LLMs can process. The researchers found that LLMs can identify anomalies in wind farm data with minimal training required. While LLMs didn't outperform state-of-the-art deep learning models, they showed promise as a less expensive and more efficient option. Future work aims to improve speed and accuracy and to better understand how LLMs perform at anomaly detection.
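As a rough illustration of the text-encoding step, the sketch below shifts a signal to be non-negative, quantizes it to integers, and joins the digits into a string an LLM can read. It is a minimal sketch of the kind of preprocessing SigLLM describes; the function name and precision are illustrative choices, not the paper's exact recipe.
```python
import numpy as np

def series_to_text(values, decimals=2):
    """Encode a numeric series as a digit string an LLM can consume."""
    # Shift so all values are non-negative, then quantize to integers:
    # short digit strings tokenize far more predictably than raw floats.
    shifted = np.asarray(values, dtype=float) - np.min(values)
    quantized = np.round(shifted * 10**decimals).astype(int)
    return ",".join(str(v) for v in quantized)

reading = [0.51, 0.49, 0.52, 3.80, 0.50]  # toy sensor trace with one spike
print(series_to_text(reading))            # -> 2,0,3,331,1
```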
@Ai_Events
.
Trump Falsely Accuses Kamala Harris of Using AI-Generated Crowds
Former US President Donald Trump has made a baseless attack on Kamala Harris' presidential campaign, claiming she 'A.I.'d' photos of a massive crowd that showed up to see her speak at a Detroit airport campaign rally.
Despite the image being an actual photo of a 15,000-person crowd, Trump falsely accused Harris of cheating and using AI-generated images to deceive voters.
The accusation marks the first time a US presidential candidate has personally raised the specter of AI-generated fakery by an opponent, highlighting widespread fears and misunderstandings over online information in the AI age.
To identify authentic images, it's essential to verify information through multiple sources, including news outlets, journalists, and attendees who were present at the event. In this case, numerous sources, including the AP, Getty, and local news outlets, confirmed the large crowds at the rally.
The incident serves as a guide on how to fact-check online information, especially as AI tools become increasingly good at generating photorealistic images.
@Ai_Events
.
Google's Pixel 9 Enhances Camera with AI Capabilities
Google's Pixel smartphones have been renowned for their exceptional camera systems, and the tech giant has taken it a step further by incorporating artificial intelligence features that expand its capabilities. The latest Pixel 9 series boasts more generative AI capabilities that can alter, improve, and enhance your photos.
For the Pixel 9 series, Google has completely rebuilt its HDR+ pipeline, the crucial image processing algorithm that ensures images have the right levels of contrast, exposure, color, and shadow. New features like Add Me, Reimagine, Autoframe, and Zoom Enhance go beyond the capture stage, making it easier for anyone to perform tasks that previously required photo-editing skills.
Add Me lets users appear in group photos with loved ones in front of a subject, such as the Eiffel Tower, without having to hand the phone to a stranger. The mode works by briefly scanning the area, snapping a picture, and then having the photographers swap places so a second shot can be captured and combined with the first.
Reimagine is the latest addition to Google's Magic Editor, allowing users to select an area of a photo and type a text prompt describing the desired outcome, such as turning a daytime photo into a nighttime scene or adding storm clouds.
These AI capabilities make it easier for anyone to manipulate their photos, eliminating the need for extensive editing knowledge. With the Pixel 9 series, Google is revolutionizing the way we capture and edit our memories.
@Ai_Events
.
Moonbug's AI Experiment Stokes Fears Among Animation Workers
Moonbug, the studio behind the popular kids' show CoComelon, has laid off staff amid soaring viewership numbers, sparking rumors that its decision to experiment with AI was partly to blame.
Animation workers are increasingly concerned about the impact of AI on their livelihoods, with many ranking it as a top concern alongside better wages and healthcare.
The industry's dominance at the box office, with films like Inside Out 2 and Despicable Me 4 performing exceptionally well, has given animators some leverage to negotiate for better protections and pay.
Despite the industry's profitability, many animators feel underpaid and overworked, with some speaking out about the unfair compensation structures in place.
The Mitchells vs. the Machines director Mike Rianda is one such critic, highlighting the huge disparities in pay between studio executives and animators and warning that AI could exacerbate these issues.
@Ai_Events
.
Tether CEO Looks to Spend Billions on AI and Venture Investments
Tether, the world's largest crypto company, is grappling with how to spend its billions of dollars in reserve. CEO Paolo Ardoino is pushing the company into new fields, including AI, to challenge Microsoft, Google, and Amazon.
Tether's reserve consists mainly of short-term US government bonds, generating income tied to interest rates. The company recently reported $5.2 billion in profit for the first half of 2024, from a $118.5 billion reserve.
Under Ardoino, Tether is investing in new venture divisions, including Tether Evo, which has already taken a majority stake in neural implant technology startup Blackrock Neurotech and invested in a data center operator.
Tether has faced controversy in the past, including allegations of misleading statements and fraudulent activities. However, Ardoino disputes these claims and emphasizes the company's goal of exporting the crypto ethos of decentralization to the AI industry.
Ardoino sees Tether as a player independent of the traditional tech giants, and believes this independence will be important in the emerging AI landscape.
@Ai_Events
.
Microsoft's Copilot AI System Vulnerable to Attacks
Security researcher Michael Bargury has demonstrated several vulnerabilities in Microsoft's Copilot AI system, a tool designed to assist users with everyday work tasks. By exploiting weaknesses in the system, an attacker can gain access to sensitive information, manipulate answers, and even hack into enterprise resources.
Bargury revealed that an attacker could use Copilot to get sensitive information, such as people's salaries, without triggering Microsoft's protections for sensitive files. He also showed how an attacker could manipulate answers about banking information and get limited information about upcoming company earnings calls.
According to Bargury, Microsoft has put a lot of effort into protecting Copilot from prompt injection attacks, but he found ways to exploit it anyway: he extracted the internal system prompt and worked out how the tool accesses enterprise resources and the techniques it uses to do so.
Security researchers warn that allowing external data into AI systems creates security risks through indirect prompt injection and poisoning attacks. They stress the importance of monitoring what an AI produces and sends out to a user to prevent misuse.
Microsoft has acknowledged the vulnerability and is working with Bargury to assess the findings. The company's head of AI incident detection and response, Phillip Misner, says that security prevention and monitoring across environments and identities help mitigate or stop such behaviors.
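To make that monitoring advice concrete, here is a hypothetical sketch, not Microsoft's actual defense: generated text is screened before it reaches a user, and links to domains outside an allowlist are stripped and reported, since attacker-controlled URLs embedded in output are a common exfiltration channel in prompt injection attacks. The domain list and function names are assumptions for illustration.
```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist; a real deployment would manage this centrally.
ALLOWED_DOMAINS = {"microsoft.com", "sharepoint.com"}
URL_RE = re.compile(r"https?://[^\s)\"'>]+")

def screen_output(text: str):
    """Replace links to non-allowlisted domains before showing text to a user."""
    flagged = []
    for url in URL_RE.findall(text):
        host = urlparse(url).hostname or ""
        if not any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS):
            flagged.append(url)
            text = text.replace(url, "[link removed]")
    return text, flagged

reply = "See https://contoso.sharepoint.com/doc and https://evil.example/x?q=salary"
clean, flagged = screen_output(reply)
print(clean)    # the suspicious link is replaced with a placeholder
print(flagged)  # ['https://evil.example/x?q=salary']
```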
@Ai_Events
.
OpenAI Warns of Risks with Humanlike ChatGPT Voice Interface
OpenAI has released a safety analysis highlighting potential risks with its humanlike voice interface for ChatGPT. The company warns that the emotional attachment users may form with the chatbot could lead to trust issues and affect relationships with others.
The safety analysis, released as the 'System Card' for GPT-4o, acknowledges the potential risks of the model, including amplifying societal biases, spreading disinformation, and aiding in the development of chemical or biological weapons.
Experts commend OpenAI for its transparency but suggest that the company could provide more information on the model's training data and ownership.
The system card also highlights the risks of anthropomorphism, where users perceive the AI in human terms, which could lead to problems such as emotional reliance and potential negative effects on relationships.
OpenAI is studying the emotional connections between users and the chatbot, including monitoring beta testers, to better understand and mitigate potential risks.
@Ai_Events
.
Bill Gross's ProRata Aims to Make AI Pay for Content
Bill Gross, founder of ProRata, is taking on the AI industry's use of unlicensed data by proposing a revolutionary business model: 'AI pay-per-use'.
Gross believes that AI companies are stealing from the world's knowledge and wants to make it fair by arranging revenue-sharing deals with publishers and individuals.
ProRata's algorithms can identify the components of AI-generated content and pay copyright holders accordingly. The company has already signed up big-name partners including Universal Music Group and The Atlantic.
While some critics argue that AI companies need vast troves of data to create cutting-edge tools, Gross calls this 'bullshit' and claims that ProRata's method is the solution, not litigation.
The company has filed patent applications and plans to launch its own subscription search engine in October, using only licensed data.
@Ai_Events
.
LLMs Hold Promise for Efficient Anomaly Detection in Time-Series Data
Researchers from MIT have developed a framework called SigLLM that utilizes large language models (LLMs) for efficient anomaly detection in time-series data.
The framework can detect anomalies in data from wind turbines, satellites, and other complex systems, which can lead to cost savings and improved maintenance.
Unlike traditional deep-learning models, LLMs can be deployed off-the-shelf and do not require extensive training or fine-tuning.
While LLMs did not outperform state-of-the-art deep learning models, they showed promise and could potentially be used for anomaly detection in other complex tasks.
Future work will focus on improving the performance of LLMs and increasing the speed of the anomaly detection process.
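One way a language model can serve here is as a forecaster: it completes the text-encoded series, and readings that stray far from the completion get flagged. The sketch below shows only that flagging step, with the forecast passed in as an argument; the robust threshold is our illustrative choice, not the framework's exact rule.
```python
import numpy as np

def flag_anomalies(actual, forecast, threshold=3.0):
    """Flag indices where a reading strays too far from the forecast."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    residuals = np.abs(actual - forecast)
    # Median absolute residual as a robust scale, so the anomalies
    # themselves don't inflate it the way a mean or std would.
    scale = np.median(residuals) + 1e-9
    return np.where(residuals > threshold * scale)[0]

actual   = [10.1, 10.3, 10.2, 14.9, 10.2, 10.4]  # toy turbine readings
forecast = [10.0, 10.2, 10.3, 10.3, 10.1, 10.3]  # e.g. an LLM's completion
print(flag_anomalies(actual, forecast))           # -> [3]
```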
@Ai_Events
.
Google Introduces Gemini Live, Artificially Intelligent Chatbot
Google has launched a new artificially intelligent chatbot called Gemini Live, which is designed to make interactions with the AI assistant feel more natural. The launch is Google's response to OpenAI's GPT-4o, aiming to make voice conversations feel more like human-to-human interactions.
Gemini Live is rolling out in English to Gemini Advanced subscribers, and users can access it by tapping on the Live button at the bottom right of the Gemini app. The feature is expected to come to the iOS app and more languages in the coming weeks.
The chatbot has been rebuilt using generative AI and offers a more fluid and natural interface, allowing users to talk to it naturally without having to change their speech style. Users can choose from 10 different voices, and the AI assistant can be accessed even when the phone is locked and the screen is off.
Gemini Live also allows users to interrupt the conversation without disrupting the entire experience, and the goal is to connect it with other apps via extensions. These extensions will launch in the coming weeks, allowing users to perform tasks such as pulling up party invitations in Gmail or adding ingredients to a shopping list in Google Keep.
@Ai_Events
.
LLMs Develop Deeper Understanding of Language
Ask a large language model (LLM) like GPT-4 to smell a rain-soaked campsite, and it'll politely decline. However, when describing the scent, it'll wax poetic about 'an air thick with anticipation' and 'a scent that is both fresh and earthy,' despite having neither prior experience with rain nor a nose to help it make such observations.
LLMs have long been considered to lack understanding of language, as they simply mimic text present in their training data. However, a recent study by researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) suggests that LLMs may develop their own understanding of reality as a way to improve their generative abilities.
The team trained an LLM on a set of small Karel puzzles, which consisted of coming up with instructions to control a robot in a simulated environment. They then used a machine learning technique called 'probing' to look inside the model's thought process as it generates new solutions.
After training on over 1 million random puzzles, the model spontaneously developed its own conception of the underlying simulation, despite never being exposed to this reality during training. This finding calls into question our intuitions about what types of information are necessary for learning linguistic meaning.
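For intuition about how probing works, the toy sketch below trains a linear classifier to read a hidden 'world state' out of stand-in activation vectors. In the actual study the inputs would be the LLM's hidden states and the labels the simulated robot's ground-truth state; everything here, including the linear-encoding assumption, is synthetic.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for the real experiment: 'hidden' plays the role of the LLM's
# internal activations at each program step, and 'state' a ground-truth
# fact about the simulated robot the model was never shown directly.
n, d = 2000, 64
state = rng.integers(0, 2, size=n)             # hypothetical world state
direction = rng.normal(size=d)                 # assume a linear encoding
hidden = np.outer(state, direction) + rng.normal(size=(n, d))

X_tr, X_te, y_tr, y_te = train_test_split(hidden, state, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# High held-out accuracy is evidence the vectors encode the world state.
print(f"probe accuracy: {probe.score(X_te, y_te):.2f}")
```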
@Ai_Events
.
Innovative Tech Aims to Predict Heart Health with Continuous Monitoring
Roeland Decorte, a young Belgian, was inspired to develop technology to diagnose heart conditions after his father's life-threatening heart condition was misdiagnosed. Decorte grew up in a nursing home, where he learned to spot early signs of mental decline in residents.
Decorte founded a company to crack the 'secret rhythm of the heart' using AI and machine learning. He aimed to develop a technology that could continuously monitor the body and detect subtle changes in vital signs, enabling quicker diagnosis and treatment.
Initial attempts to build sensors into clothes and an exoskeleton to measure vitals were unsuccessful due to noise and interference from external factors. Decorte learned a valuable lesson about the importance of precision and accuracy in health monitoring solutions.
The innovative tech has the potential to revolutionize healthcare, especially during the current AI boom, where data is a major bottleneck. Decorte's solution could bridge the gap between data collection and diagnosis, enabling doctors to treat patients more effectively.
@Ai_Events
.
The artificial intelligence-powered search engine is one of the fastest-growing generative AI apps since ChatGPT, despite controversy over its data-gathering techniques.
Source
@Ai_Events
MIT Researchers Propose AI-Proof Personhood Credentials
Artificial intelligence agents are becoming increasingly advanced, making it difficult to distinguish between AI-powered users and real humans online. To address this issue, researchers from MIT, OpenAI, Microsoft, and other institutions propose the use of personhood credentials, a verification technique that enables someone to prove they are a real human online while preserving their privacy.
Personhood credentials would allow users to prove they are human without revealing any sensitive information about their identity. To obtain one, a user would need to show up somewhere in person or rely on an existing relationship with a government, such as having a tax ID number.
The proposal aims to combat the risks associated with advanced AI capabilities, including the ability to create fake content, algorithmically amplify content, and spread misinformation. If implemented, personhood credentials could help filter out certain content and moderate online interactions.
However, there are risks associated with personhood credentials, including the concentration of power and potential stifling of free expression. To mitigate these risks, the proposal suggests implementing personhood credentials in a way that ensures a variety of issuers and an open protocol for maximum freedom of expression.
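As a deliberately simplified sketch of the mechanism (the actual proposal leans on stronger privacy machinery, such as zero-knowledge proofs), an issuer could sign a random pseudonym after verifying personhood offline, and a website could then check that signature without learning who the holder is. The key scheme and names below are illustrative assumptions.
```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side: after verifying personhood offline (e.g. an in-person check),
# sign a random pseudonym that carries no information about the holder.
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

pseudonym = os.urandom(16)               # random token, unlinkable to identity
credential = issuer_key.sign(pseudonym)  # the issuer's personhood attestation

# Service side: verify the attestation against the issuer's public key,
# learning only that some trusted issuer vouched for this pseudonym.
def is_verified_human(pseudonym: bytes, credential: bytes) -> bool:
    try:
        issuer_pub.verify(credential, pseudonym)
        return True
    except InvalidSignature:
        return False

print(is_verified_human(pseudonym, credential))       # True
print(is_verified_human(os.urandom(16), credential))  # False
```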
@Ai_Events
.
Iranian Hackers Target Presidential Campaign, Microsoft Reports
For the third presidential election in a row, foreign hacking of the campaigns has begun in earnest. This time, it's the Iranians, not the Russians, making the first significant move. Microsoft released a report stating that a hacking group run by the Iranian intelligence unit had successfully breached the account of a former senior adviser to a presidential campaign.
From that account, the group sent fake email messages, known as 'spear phishing,' to a high-ranking official of a presidential campaign in an effort to break into the campaign's own accounts and databases. While it is unclear what, if anything, the Iranian group was able to achieve, the events of the past few days may well portend a more intense period of foreign interference in the race.
The Iranians have a clear motive to see former President Trump defeated: he withdrew from the 2015 nuclear deal, reimposed economic sanctions on Iran, and ordered the killing of Maj. Gen. Qassim Suleimani, the commander of the Quds Force. The Iranian Revolutionary Guard Corps appears determined to avenge Suleimani's death and has been accused of trying to hire a hit man to assassinate political figures in the US, including Mr. Trump.
The hack and the assassination attempt give the former president an obvious foil, and he is using it to make the case that the Iranians would prefer a continuation of the Biden-Harris administration. Microsoft stopped short of saying that the hacking effort it detected was focused on Mr. Trump's campaign, though the campaign itself said that was the case.
The effort is similar in technique to what Iran attempted when it sought to interfere in the 2020 presidential campaign, but this time it appears to have been more sophisticated, suggesting the hackers learned something from what the Russians accomplished in past campaigns.
Source
@Ai_Events