MLx Generative AI (Theory, Agents, Products)
Dates: 22-24 August 2024 (3 days)
Location: London School of Economics (LSE) & Online
Register: www.oxfordml.school/genai
Deadline: 12th August
- Perfect for professionals, researchers, and students looking to stay ahead in the rapidly evolving field of GenAI.
- Upon completion, participants will receive CPD-accredited certificates.
- For any enquiries, contact us at [email protected]
@Ai_Events
AI model identifies certain breast tumor stages likely to progress to invasive cancer
Researchers from MIT and ETH Zurich have developed an AI model that can identify the different stages of ductal carcinoma in situ (DCIS) from a cheap and easy-to-obtain breast tissue image.
They trained and tested the model on a dataset of 560 tissue sample images from 122 patients at three different stages of the disease. The model identifies eight cell states that are important markers of DCIS and determines the proportion of cells in each state in a tissue sample.
However, the researchers found that the proportions of cells in each state are not enough on their own, because the organization of the cells also changes as the disease progresses. Designing the model to consider both the proportion and the arrangement of cell states significantly boosted its accuracy.
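As a rough sketch of the two feature families described above (the state labels, neighbour rule, and feature layout here are illustrative assumptions, not the study's actual pipeline), one could combine per-state proportions with a simple arrangement signal such as nearest-neighbour state co-occurrence:

```python
import numpy as np

# Hypothetical illustration: per-cell state labels and 2D positions
# stand in for the annotated tissue image.
rng = np.random.default_rng(0)
n_cells, n_states = 200, 8
states = rng.integers(0, n_states, size=n_cells)  # one of 8 cell states per cell
coords = rng.random((n_cells, 2))                 # cell positions in the tissue

# 1) Proportion of cells in each state
proportions = np.bincount(states, minlength=n_states) / n_cells

# 2) Arrangement: which states sit next to which, via nearest neighbours
dists = np.linalg.norm(coords[:, None] - coords[None], axis=-1)
np.fill_diagonal(dists, np.inf)                   # exclude self
nearest = dists.argmin(axis=1)
cooc = np.zeros((n_states, n_states))
for i, j in zip(states, states[nearest]):
    cooc[i, j] += 1
cooc /= cooc.sum()

# Concatenated feature vector a downstream classifier could consume
features = np.concatenate([proportions, cooc.ravel()])
```

The point of the arrangement term is that two samples can share identical state proportions yet differ in how the states are spatially organized, which is the distinction the researchers found necessary.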
In many instances the model clearly agreed with a pathologist's evaluation of the samples, and it could surface information about features in a tissue sample, such as the organization of cells, that a pathologist could use in decision-making.
Read more
@Ai_Events
MIT News
AI model identifies certain breast tumor stages likely to progress to invasive cancer
A new machine-learning model can identify the stage of disease in ductal carcinoma in situ, a type of preinvasive tumor that can sometimes progress to a deadly form of breast cancer. This could help clinicians avoid overtreating patients whose disease is…
Large language models don’t behave like people, even though we may expect them to
Researchers from MIT created a framework to evaluate LLMs based on their alignment with human beliefs about their capabilities. They found that when models are misaligned, users may be overconfident or underconfident, leading to unexpected failures. The study also showed that more capable models tend to perform worse in high-stakes situations due to this misalignment.
Another important finding involves "human generalization": people form beliefs about an LLM's capabilities based on their interactions with it. The researchers found that humans are worse at generalizing about LLMs than about other people, and that this can lead to misalignment between human beliefs and model performance.
The study also highlights the importance of understanding how people form beliefs about LLMs, which is crucial for deploying them effectively. The researchers hope to conduct more studies on this topic and to develop ways to incorporate human generalization into the development of LLMs.
Read more
@Ai_Events
MIT News
Large language models don’t behave like people, even though we may expect them to
People generalize to form beliefs about a large language model’s performance based on what they’ve seen from past interactions. When an LLM is misaligned with a person’s beliefs, even an extremely capable model may fail unexpectedly when deployed in a real…
Argentina is implementing artificial intelligence to predict and prevent future crimes
The Ministry of Security is setting up a specialized unit involving members of the Federal Police and other security forces. The main task of this unit will be to use machine learning algorithms to analyze historical crime data to forecast future criminal activities and to monitor social networks for potential criminal communications. Despite government assurances, this initiative has raised skepticism and concern among the public.
Source
@Ai_Events
Cointelegraph
Argentina plans to adopt AI to predict and prevent ‘future crimes’
Argentina’s government plans to create an AI unit to detect patterns in computer networks and social media to prevent crimes before they occur.
AI method radically speeds predictions of materials’ thermal properties
Researchers developed a virtual node graph neural network (VGNN) to predict phonon dispersion relations. This approach is more efficient than traditional methods and can be used to predict phonons directly from a material's atomic coordinates.
The VGNN uses virtual nodes to represent phonons, which allows it to skip complex calculations and makes the method more efficient. The researchers proposed three versions of VGNNs with increasing complexity, each of which predicts phonons directly from a material's atomic coordinates.
The VGNN method is not limited to phonons and can also be used to predict challenging optical and magnetic properties. The researchers plan to refine the technique to capture small changes that can affect phonon structure in the future.
The work has the potential to accelerate the design of more efficient energy generation systems and improve the development of more efficient microelectronics.
Read more
@Ai_Events
MIT News
AI method radically speeds predictions of materials’ thermal properties
Researchers developed a machine-learning framework that can predict a key property of heat dispersion in materials that is up to 1,000 times faster than other AI methods, and could enable scientists to improve the efficiency of power generation systems and…
OpenAI is developing a new tool aimed at detecting students using ChatGPT for assignments, but its release remains uncertain.
Last year, OpenAI introduced an "AI text detector" that was discontinued due to its low accuracy. The new watermarking method promises high accuracy and aims to identify text generated by ChatGPT through minor alterations in wording. However, known issues with tampering and text editing highlight the approach's vulnerability, and there is concern that watermarking could stigmatize AI use among non-native English speakers.
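OpenAI has not published the scheme, so as a hedged toy example only: text watermarking of this kind is often described in the research literature as biasing generation toward a pseudorandom "green list" of words, which a detector can later count. The key name and hashing rule below are pure assumptions for illustration.

```python
import hashlib

def is_green(word: str, key: str = "secret") -> bool:
    # Deterministically assign ~half of all words to a keyed "green list".
    h = hashlib.sha256((key + word.lower()).encode()).digest()
    return h[0] % 2 == 0

def green_fraction(text: str) -> float:
    # Detector: fraction of words on the green list. Ordinary text hovers
    # near 0.5; text generated with a green-list bias scores well above it.
    words = text.split()
    return sum(is_green(w) for w in words) / max(len(words), 1)
```

This also shows why the approach is fragile: rewording or translating the text replaces the watermarked word choices, pushing the detector's score back toward chance.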
Source
@Ai_Events
Creating and verifying stable AI-controlled systems in a rigorous and flexible way
Researchers have developed new techniques to rigorously certify Lyapunov calculations in complex systems, enabling safer deployment of robots and autonomous vehicles. The approach efficiently searches for and verifies a Lyapunov function, providing stability guarantees for the system. This has potential wide-ranging applications, including ensuring a smoother ride for autonomous vehicles and drones.
The researchers found a frugal shortcut to the training and verification process, generating cheaper counterexamples and optimizing the robotic system to account for them. They also developed a novel verification formulation that enables the use of a scalable neural network verifier, α,β-CROWN, to provide rigorous worst-case scenario guarantees beyond the counterexamples.
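As a toy sketch of the search-and-check loop described above (the dynamics, the candidate function, and the sampling below are illustrative assumptions, not the authors' method or the α,β-CROWN verifier): a candidate Lyapunov function is checked on sampled states, and any violations become the counterexamples fed back into training.

```python
import numpy as np

def f(x, dt=0.01):
    # Toy stable linear dynamics x' = A x, discretized with a small step.
    A = np.array([[0.0, 1.0], [-2.0, -1.0]])
    return x + dt * (A @ x)

def V(x):
    # Candidate quadratic Lyapunov function V(x) = x^T P x, P positive definite.
    P = np.array([[1.5, 0.25], [0.25, 0.75]])
    return x @ P @ x

# Check the Lyapunov conditions V(x) > 0 and V(f(x)) - V(x) < 0 on samples;
# violations are the "cheap counterexamples" the training loop would reuse.
rng = np.random.default_rng(1)
samples = rng.uniform(-1, 1, size=(1000, 2))
counterexamples = [x for x in samples
                   if not (V(x) > 0 and V(f(x)) - V(x) < 0)]
```

An empty list suggests, but does not prove, stability on the sampled region; the article's point is that a formal neural-network verifier is then needed to certify the conditions for all states, not just the samples.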
The technique is general and could be applied to other applications, such as biomedicine and industrial processing. The researchers are exploring how to improve performance in systems with higher dimensions and account for data beyond lidar readings.
Read more
@Ai_Events
MIT News
Creating and verifying stable AI-controlled systems in a rigorous and flexible way
New techniques incorporate deep learning to synthesize and verify neural network controllers with stability guarantees. Their algorithm efficiently searches for and verifies a Lyapunov function, and is scalable to more complex robots like quadrotors.
We need to prepare for ‘addictive intelligence’!
AI companions like Replika offer users a chance to connect with holographic copies of deceased loved ones. But experts warn that these interactions can be addictive, thanks to AI's ability to cater to our desires and mirror our emotions. As AI becomes more advanced, it's essential to investigate the incentives driving its development and create policies to address potential harms.
Read more
@Ai_Events
MIT Technology Review
We need to prepare for ‘addictive intelligence’
The allure of AI companions is hard to resist. Here’s how innovation in regulation can help protect people.
Google DeepMind trained a robot to beat humans at table tennis
A new table tennis bot developed by Google DeepMind has beaten all of its beginner-level human opponents and 55% of those playing at amateur level. Although it lost to advanced players, it is an impressive advance. The techniques behind the bot could carry over to real-world applications, such as performing useful tasks in homes and warehouses. Researchers used a two-part approach to train the system to mimic human skills like hand-eye coordination and quick decision-making.
Read more
@Ai_Events
AI “godfather” Yoshua Bengio has joined a UK project to prevent AI catastrophes
Researchers at Safeguarded AI aim to build AI systems that offer quantitative guarantees about their impact on the world. They're using mathematical analysis to supplement human testing, ensuring AI systems operate as intended. The team hopes to create a 'gatekeeper' AI that reduces safety risks in high-stakes sectors like transport and energy. Without AI safeguarding AI, complex systems will be too complicated to analyze manually.
Read more
@Ai_Events
Overcoming Obstacles to Enterprise-Wide AI Deployment
Only 5.4% of US businesses used AI to produce a product or service in 2024. Scaling AI requires strategic transitions in infrastructure, data governance, and supplier ecosystems. AI-readiness spending is set to rise, with 9 in 10 companies increasing AI budgets. Data liquidity and quality are crucial for AI deployment: 50% of companies cite data quality as the most limiting factor. Companies are willing to pause AI adoption if doing so ensures safety and security.
Read more
@Ai_Events
MIT Researchers Find Potential in Using Large Language Models for Anomaly Detection
MIT researchers are exploring the potential of using Large Language Models (LLMs) for anomaly detection in time-series data. The approach, called SigLLM, involves converting time-series data into text-based inputs that LLMs can process. The researchers found that LLMs can be used to identify anomalies in wind farm data with minimal training required. While LLMs didn't outperform state-of-the-art deep learning models, they showed promise as a less expensive and more efficient option. Future work aims to improve performance, speed, and understanding of LLM performance in anomaly detection.
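The core idea of serializing a time series into text is simple to sketch. As a hedged illustration of that general step only (the scaling, quantization, and formatting choices below are assumptions, not SigLLM's exact pipeline): numeric readings are rescaled, quantized, and written out as a digit string an LLM can consume token by token.

```python
import numpy as np

def series_to_text(values, n_digits=3):
    # Rescale the series into [0, 1], quantize to fixed-width integers,
    # and serialize as comma-separated text, e.g. "000,214,428,...".
    v = np.asarray(values, dtype=float)
    v = (v - v.min()) / (v.max() - v.min() + 1e-12)
    q = np.round(v * (10**n_digits - 1)).astype(int)
    return ",".join(f"{x:0{n_digits}d}" for x in q)

signal = np.sin(np.linspace(0.0, 6.28, 8))
prompt = series_to_text(signal)  # text input for an off-the-shelf LLM
```

With the series in text form, anomaly detection can be framed as a language task, such as asking the model which values look out of place or comparing its next-value predictions against the real readings, with no model retraining required.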
@Ai_Events
Trump Falsely Accuses Kamala Harris of Using AI-Generated Crowds
Former US President Donald Trump has made a baseless attack on Kamala Harris' presidential campaign, claiming she 'A.I.'d' photos of a massive crowd that showed up to see her speak at a Detroit airport campaign rally.
Despite the image being an actual photo of a 15,000-person crowd, Trump falsely accused Harris of cheating and using AI-generated images to deceive voters.
The accusation marks the first time a US presidential candidate has personally raised the specter of AI-generated fakery by an opponent, highlighting widespread fears and misunderstandings over online information in the AI age.
To identify authentic images, it's essential to verify information through multiple sources, including news outlets, journalists, and attendees who were present at the event. In this case, numerous sources, including the AP, Getty, and local news outlets, confirmed the large crowds at the rally.
The incident serves as a guide on how to fact-check online information, especially as AI tools become increasingly good at generating photorealistic images.
@Ai_Events
Google's Pixel 9 Enhances Camera with AI Capabilities
Google's Pixel smartphones have been renowned for their exceptional camera systems, and the tech giant has taken it a step further by incorporating artificial intelligence features that expand its capabilities. The latest Pixel 9 series boasts more generative AI capabilities that can alter, improve, and enhance your photos.
The Pixel 9 series has completely rebuilt its HDR+ pipeline, a crucial image processing algorithm ensuring images have the right levels of contrast, exposure, colors, and shadows. New features like Add Me, Reimagine, Autoframe, and Zoom Enhance go beyond the capture stage, making it easier for anyone to perform tasks that previously required photo-editing skills.
Add Me enables users to take selfies with loved ones in front of a subject, such as the Eiffel Tower, without having to hand over the phone. This mode works by scanning the area briefly, snapping a picture, and then swapping places to capture the desired shot.
Reimagine is the latest addition to Google's Magic Editor, allowing users to select an area of a photo and input a text prompt to achieve the desired outcome, such as turning daytime photos to nighttime or adding stormy clouds.
These AI capabilities make it easier for anyone to manipulate their photos, eliminating the need for extensive editing knowledge. With the Pixel 9 series, Google is revolutionizing the way we capture and edit our memories.
@Ai_Events
Moonbug's AI Experiment Stokes Fears Among Animation Workers
Moonbug, the studio behind the popular kids show CoComelon, has laid off staff amid soaring viewership numbers, sparking rumors that the decision to experiment with AI was partly to blame.
Animation workers are increasingly concerned about the impact of AI on their livelihoods, with many ranking it as a top concern alongside better wages and healthcare.
The industry's dominance at the box office, with films like Inside Out 2 and Despicable Me 4 performing exceptionally well, has given animators some leverage to negotiate for better protections and pay.
Despite the industry's profitability, many animators feel underpaid and overworked, with some speaking out about the unfair compensation structures in place.
Mitchells vs. the Machines director Mike Rianda is one such critic, highlighting the huge disparities in pay between studio executives and animators, and warning that AI could exacerbate these issues.
@Ai_Events
Tether CEO Looks to Spend Billions on AI and Venture Investments
Tether, the world's largest crypto company, is grappling with how to spend its billions of dollars in reserve. CEO Paolo Ardoino is pushing the company into new fields, including AI, to challenge Microsoft, Google, and Amazon.
Tether's reserve consists mainly of short-term US government bonds, generating income tied to interest rates. The company recently reported $5.2 billion in profit for the first half of 2024, from a $118.5 billion reserve. Under Ardoino, Tether is investing in new venture divisions, including Tether Evo, which has already taken a majority stake in the neural implant technology startup Blackrock Neurotech and invested in a data center operator.
Tether has faced controversy in the past, including allegations of misleading statements and fraudulent activities. However, Ardoino disputes these claims and emphasizes the company's goal of exporting the crypto ethos of decentralization to the AI industry.
Ardoino sees Tether as a player independent of the traditional tech giants, and believes this independence will be important in the emerging AI landscape.
@Ai_Events
Microsoft's Copilot AI System Vulnerable to Attacks
Security researcher Bargury has demonstrated several vulnerabilities in Microsoft's Copilot AI system, a tool designed to assist users with everyday tasks. By exploiting weaknesses in the system, an attacker can gain access to sensitive information, manipulate answers, and even hack into enterprise resources.
Bargury revealed that an attacker could use Copilot to get sensitive information, such as people's salaries, without triggering Microsoft's protections for sensitive files. He also showed how an attacker could manipulate answers about banking information and get limited information about upcoming company earnings calls.
According to Bargury, the company has put a lot of effort into protecting Copilot from prompt injection attacks, but he found ways to exploit it. He extracted the internal system prompt and worked out how it can access enterprise resources and the techniques it uses to do so.
Security researchers warn that allowing external data into AI systems creates security risks through indirect prompt injection and poisoning attacks. They stress the importance of monitoring what an AI produces and sends out to a user to prevent misuse.
Microsoft has acknowledged the vulnerability and is working with Bargury to assess the findings. The company's head of AI incident detection and response, Phillip Misner, says that security prevention and monitoring across environments and identities help mitigate or stop such behaviors.
@Ai_Events