Google’s new weather prediction system combines AI with traditional physics
Weather and climate experts are divided on whether AI or more traditional methods are most effective. In this new model, Google’s researchers bet on both.
Read more
@Ai_Events
Google DeepMind’s new AI systems can now solve complex math problems
AlphaProof and AlphaGeometry 2 are steps toward building systems that can reason, which could unlock exciting new capabilities.
Read more
@Ai_Events
ChatGPT Advanced Voice Mode 😱😱😱
Counting as fast as it can to 10, then to 50 (this blew my mind - it stopped to catch its breath like a human would)
Source
@Ai_Events
How machines that can solve complex math problems might usher in more powerful AI
Google DeepMind’s AlphaProof and AlphaGeometry 2 are milestones for AI reasoning.
Read more
@Ai_Events
Intel has worst day on Wall Street in 50 years, falls to lowest price in over a decade
Intel shares had their biggest one-day drop since 1974 on Friday after the chipmaker reported a large earnings miss for the June quarter and said it would lay off more than 15% of its employees.
The stock is trading at its lowest since 2013.
Asian names including Samsung and TSMC closed lower, with European chip firms such as ASML also dropping.
Read more
@Ai_Events
CNBC
Intel to cut 15% of workforce, reports quarterly guidance miss
Intel will say goodbye to 15,000 employees, cut capital expenditures and forgo a fourth-quarter dividend following weak results and quarterly guidance.
AI companies promised to self-regulate one year ago. What’s changed?
The White House’s voluntary AI commitments have brought better red-teaming practices and watermarks, but no meaningful transparency or accountability.
Read more
@Ai_Events
When allocating scarce resources with AI, randomization can improve fairness
The use of machine-learning models to allocate scarce resources or opportunities can be improved by introducing randomization into the decision-making process.
Researchers from MIT and Northeastern University argue that traditional fairness methods, such as adjusting features or calibrating scores, are insufficient to address structural injustices and inherent uncertainties.
Introducing randomization can prevent one deserving person or group from always being denied a scarce resource, and can be especially beneficial in situations involving uncertainty or repeated negative decisions.
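To make the idea concrete, here is a minimal sketch of a weighted lottery (an illustrative mechanism, not the researchers' code; the scores and slot count below are hypothetical): instead of always awarding the available slots to the top-scored candidates, winners are sampled with probability proportional to their scores, so a borderline candidate is not shut out in every round.
```python
import numpy as np

def weighted_lottery(scores, k, rng=None):
    # Sample k winners without replacement, with selection probability
    # proportional to each candidate's model score. A sketch of
    # structured randomization, not the paper's exact mechanism.
    rng = np.random.default_rng() if rng is None else rng
    scores = np.asarray(scores, dtype=float)
    return rng.choice(len(scores), size=k, replace=False, p=scores / scores.sum())

# Deterministic top-k would always pick candidates 3 and 4; the lottery
# occasionally awards a slot to a lower-scored candidate instead.
scores = [0.55, 0.60, 0.62, 0.90, 0.95]  # hypothetical model scores
print(weighted_lottery(scores, k=2))
```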
Read more
@Ai_Events
MIT researchers advance automated interpretability in AI models
Understanding AI Systems
Artificial intelligence models are becoming increasingly prevalent, and understanding how they work is crucial for auditing and improving their performance. MIT researchers developed MAIA, a system that automates the interpretation of artificial vision models. MAIA can label individual components, identify biases, and even design experiments to test hypotheses.
MAIA in Action
MAIA demonstrates its ability to tackle three key tasks: labeling individual components, cleaning up image classifiers, and hunting for hidden biases. For example, when asked to describe the concepts a particular neuron inside a vision model is responsible for detecting, MAIA uses its tools to design experiments and test hypotheses, arriving at a comprehensive answer.
Limitations and Future Directions
While MAIA is a significant step forward in interpretability, it has limitations. For instance, its performance is limited by the quality of the tools it uses and can sometimes display confirmation bias. Future directions include scaling up the method to apply it to human perception and developing tools to overcome its limitations.
As artificial intelligence models become increasingly prevalent, and are integrated into diverse sectors like health care, finance, education, transportation, and entertainment, understanding how they work under the hood is critical. Imagine if we could directly investigate the human brain by manipulating each of its individual neurons to examine their roles in perceiving a particular object.
...
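As a loose illustration of the experiment loop MAIA automates (a toy with a hypothetical stand-in "neuron," not MAIA's actual API), one can probe a unit with labeled inputs and report what drives it most strongly:
```python
import numpy as np

# Hypothetical stand-in for a vision-model unit: responds to reddish
# inputs. A real run would query a neuron inside an actual network.
def neuron_activation(img):
    red, green, blue = img[..., 0], img[..., 1], img[..., 2]
    return float((red - 0.5 * (green + blue)).mean())

def probe_neuron(images, labels, top_k=1):
    # One step of an automated-interpretability loop: rank labeled probe
    # images by activation and report the strongest activators as a
    # hypothesis about the concept the unit detects.
    acts = [neuron_activation(img) for img in images]
    order = np.argsort(acts)[::-1][:top_k]
    return [labels[i] for i in order]

rng = np.random.default_rng(0)
labels = ["red patch", "green patch", "blue patch", "noise"]
images = [np.zeros((8, 8, 3)) for _ in labels]
images[0][..., 0] = 1.0
images[1][..., 1] = 1.0
images[2][..., 2] = 1.0
images[3] = rng.random((8, 8, 3))
print("unit seems selective for:", probe_neuron(images, labels))
```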
Read more
@Ai_Events
MIT News
MAIA is a multimodal agent for neural network interpretability tasks developed at MIT CSAIL. It uses a vision-language model as a backbone and equips it with tools for experimenting on other AI systems.
🔊 Launch event for the open-source release of the DadmaTools AI toolkit
At this event, the DadmaTools NLP toolkit will be officially released as open source, along with plans for community-driven development involving universities, the private sector, and government support.
Join a major gathering of the country's NLP specialists.
🔹 Time:
Monday, 15 Mordad, 10:00 to 12:00
🔹 Location:
Innovation and Prosperity Fund, amphitheater hall
📎 Registration link:
https://evand.com/events/dadmatools
@Ai_Events
MLx Generative AI (Theory, Agents, Products)
Dates: 22-24 August 2024 (3 days)
Location: London School of Economics (LSE) & Online
Register: www.oxfordml.school/genai
Deadline: 12th August
- Perfect for professionals, researchers, and students looking to stay ahead in the rapidly evolving field of GenAI.
- Upon completion, participants will receive CPD-accredited certificates.
- For any enquiries contact us on [email protected]
@Ai_Events
AI model identifies certain breast tumor stages likely to progress to invasive cancer
Researchers from MIT and ETH Zurich have developed an AI model that can identify the different stages of ductal carcinoma in situ (DCIS) from a cheap, easy-to-obtain breast-tissue image.
They trained and tested the model on a dataset of 560 tissue-sample images from 122 patients at three different stages of disease. The model identifies eight cell states that are important markers of DCIS and determines the proportion of cells in each state in a tissue sample.
However, the researchers found that the proportions of cells in each state alone are not enough: the spatial organization of the cells also changes as the disease progresses. Designing the model to consider both the proportion and the arrangement of cell states significantly boosted its accuracy.
In many instances the model's assessments clearly agree with a pathologist's evaluation of the same samples, and it could surface features of a tissue sample, such as the organization of cells, that pathologists could use in decision-making.
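To illustrate the "proportion plus arrangement" idea, here is a toy feature extractor (illustrative only, not the study's pipeline; the cell positions, state labels, and radius below are hypothetical):
```python
import numpy as np
from scipy.spatial import cKDTree

def dcis_features(positions, states, n_states=8, radius=20.0):
    # Toy features: per-state proportions plus a normalized co-occurrence
    # matrix of states among nearby cell pairs, a rough stand-in for
    # "proportion + arrangement" (not the paper's actual model).
    states = np.asarray(states)
    proportions = np.bincount(states, minlength=n_states) / len(states)
    cooc = np.zeros((n_states, n_states))
    for i, j in cKDTree(positions).query_pairs(radius):
        cooc[states[i], states[j]] += 1
        cooc[states[j], states[i]] += 1
    if cooc.sum() > 0:
        cooc /= cooc.sum()
    return proportions, cooc

rng = np.random.default_rng(1)
positions = rng.random((200, 2)) * 100  # hypothetical cell centroids
states = rng.integers(0, 8, size=200)   # hypothetical per-cell states
proportions, cooc = dcis_features(positions, states)
print(proportions.round(2))
```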
Read more
@Ai_Events
MIT News
A new machine-learning model can identify the stage of disease in ductal carcinoma in situ, a type of preinvasive tumor that can sometimes progress to a deadly form of breast cancer. This could help clinicians avoid overtreating patients whose disease is…
Large language models don’t behave like people, even though we may expect them to
Researchers from MIT created a framework to evaluate LLMs based on how well they align with human beliefs about their capabilities. They found that when models are misaligned with those beliefs, users may be overconfident or underconfident, leading to unexpected failures. The study also showed that more capable models tend to perform worse in high-stakes situations because of this misalignment.
Another key finding: the researchers introduced the concept of "human generalization," in which people form beliefs about an LLM's capabilities based on their interactions with it. Humans turn out to be worse at generalizing about LLMs than about other people, which can drive the mismatch between human beliefs and model performance.
The study also underscores that understanding how people form beliefs about LLMs is crucial for deploying them effectively. The researchers hope to conduct more studies on this topic and to develop ways to incorporate human generalization into the development of LLMs.
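One simple way to make this misalignment measurable (an illustrative metric, not the paper's formulation; the data below are hypothetical) is to compare a person's predictions of whether the model will answer correctly against the model's actual outcomes:
```python
def belief_alignment(human_predictions, model_correct):
    # Fraction of questions where a person's belief about whether the
    # LLM will succeed matches what actually happened. Low alignment
    # means users are systematically over- or under-confident.
    assert len(human_predictions) == len(model_correct)
    matches = sum(p == c for p, c in zip(human_predictions, model_correct))
    return matches / len(model_correct)

# Hypothetical data: the user expects success on every easy-looking
# question, but the model fails on one of them.
human_predictions = [True, True, False, True]   # "will the LLM get it right?"
model_correct     = [True, False, False, True]  # what the LLM actually did
print(belief_alignment(human_predictions, model_correct))  # 0.75
```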
Read more
@Ai_Events
Argentina is implementing artificial intelligence to predict and prevent future crimes
The Ministry of Security is setting up a specialized unit involving members of the Federal Police and other security forces. The unit's main task will be to use machine-learning algorithms to analyze historical crime data to forecast future criminal activity, and to monitor social networks for potential criminal communications. Despite government assurances, the initiative has raised skepticism and concern among the public.
Source
@Ai_Events
Cointelegraph
Argentina plans to adopt AI to predict and prevent ‘future crimes’
Argentina’s government plans to create an AI unit to detect patterns in computer networks and social media to prevent crimes before they occur.
AI method radically speeds predictions of materials’ thermal properties
Researchers developed a virtual node graph neural network (VGNN) that predicts phonon dispersion relations directly from a material's atomic coordinates, far more efficiently than traditional methods.
The VGNN uses virtual nodes to represent phonons, which lets it skip complex calculations and makes the method much more efficient. The researchers proposed three versions of the VGNN with increasing complexity.
The method is not limited to phonons: it can also be used to predict challenging optical and magnetic properties. The researchers plan to refine the technique to capture the small changes that can affect phonon structure.
The work has the potential to accelerate the design of more efficient energy-generation systems and to improve the development of more efficient microelectronics.
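A schematic of the virtual-node idea (a sketch under the assumption of one virtual node per phonon band, not the authors' implementation; the toy unit cell below is hypothetical): extra nodes are appended to the atomic graph and connected to every atom, so a message-passing network can read one band energy off each virtual node.
```python
import numpy as np

def add_virtual_nodes(adjacency, n_virtual):
    # Append n_virtual fully connected "virtual" nodes to an atomic
    # adjacency matrix. In a VGNN-style model each virtual node gathers
    # messages from all atoms, and its final embedding is decoded into
    # one phonon-band energy (schematic, not the paper's architecture).
    n = adjacency.shape[0]
    out = np.zeros((n + n_virtual, n + n_virtual))
    out[:n, :n] = adjacency
    out[:n, n:] = 1.0  # every atom connects to every virtual node
    out[n:, :n] = 1.0
    return out

# Hypothetical 3-atom unit cell with chain connectivity;
# 3 atoms in 3D give 9 phonon branches, hence 9 virtual nodes.
atoms = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]], dtype=float)
print(add_virtual_nodes(atoms, n_virtual=9).shape)  # (12, 12)
```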
Read more
@Ai_Events
MIT News
Researchers developed a machine-learning framework that can predict a key property of heat dispersion in materials that is up to 1,000 times faster than other AI methods, and could enable scientists to improve the efficiency of power generation systems and…
OpenAI is developing a new tool aimed at detecting students using ChatGPT for assignments, but its release remains uncertain.
Last year, OpenAI introduced an AI text detector that was discontinued due to its low accuracy. The new watermarking method promises high accuracy: it identifies text generated by ChatGPT via minor alterations to word choices made during generation. However, the watermark remains vulnerable to tampering, such as paraphrasing or rewording. There is also concern that watermarking could stigmatize the use of AI as a writing aid among non-native English speakers.
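OpenAI has not disclosed how its watermark works. As a stand-in, here is a toy version of one published scheme (the Kirchenbauer et al. "green list" watermark): generation is biased toward a pseudorandom subset of tokens, and a detector checks for that bias with a z-score. The token IDs and vocabulary size below are hypothetical.
```python
import hashlib

def green_list(prev_token, vocab_size, fraction=0.5):
    # Pseudorandomly mark a fraction of the vocabulary "green," seeded
    # by the previous token (Kirchenbauer-style scheme, used here as a
    # stand-in for OpenAI's undisclosed method).
    return {
        tok for tok in range(vocab_size)
        if hashlib.sha256(f"{prev_token}:{tok}".encode()).digest()[0] < 256 * fraction
    }

def watermark_z_score(tokens, vocab_size, fraction=0.5):
    # z-score of the observed green-token count against the
    # unwatermarked expectation; large positive values suggest the
    # text was generated with the watermark applied.
    hits = sum(tok in green_list(prev, vocab_size, fraction)
               for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    mean, var = n * fraction, n * fraction * (1 - fraction)
    return (hits - mean) / var ** 0.5

print(watermark_z_score([3, 7, 7, 1, 4, 4, 9, 2], vocab_size=16))
```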
Source
@Ai_Events
Creating and verifying stable AI-controlled systems in a rigorous and flexible way
Researchers have developed new techniques to rigorously certify Lyapunov calculations in complex systems, enabling safer deployment of robots and autonomous vehicles. The approach efficiently searches for and verifies a Lyapunov function, providing stability guarantees for the system. This has potential wide-ranging applications, including ensuring a smoother ride for autonomous vehicles and drones.
The researchers found a frugal shortcut to the training and verification process, generating cheaper counterexamples and optimizing the robotic system to account for them. They also developed a novel verification formulation that enables the use of a scalable neural network verifier, α,β-CROWN, to provide rigorous worst-case scenario guarantees beyond the counterexamples.
The technique is general and could be applied to other applications, such as biomedicine and industrial processing. The researchers are exploring how to improve performance in systems with higher dimensions and account for data beyond lidar readings.
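As a toy of the search-and-verify loop (vastly simplified relative to the paper's neural-network controllers; the system matrix below is hypothetical), one can sample states for counterexamples where a candidate V(x) = xᵀPx fails to decrease along ẋ = Ax, then refine the candidate:
```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def find_counterexample(A, P, n_samples=10_000, seed=0):
    # Sample states looking for x where V(x) = x^T P x fails to strictly
    # decrease along x' = A x, i.e. x^T (A^T P + P A) x >= 0. A toy
    # sampler, not a formal verifier like alpha,beta-CROWN.
    rng = np.random.default_rng(seed)
    Q = A.T @ P + P @ A  # d/dt V(x) = x^T Q x
    for _ in range(n_samples):
        x = rng.standard_normal(A.shape[0])
        if x @ Q @ x >= 0:
            return x  # counterexample: V does not decrease here
    return None

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])  # hypothetical stable linear system

# The naive candidate V = x^T x is rejected by a counterexample...
print(find_counterexample(A, np.eye(2)))
# ...so refine: solve A^T P + P A = -I for a valid candidate.
P = solve_continuous_lyapunov(A.T, -np.eye(2))
print(find_counterexample(A, P))  # None: no violation found
```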
Read more
@Ai_Events
MIT News
New techniques incorporate deep learning to synthesize and verify neural network controllers with stability guarantees. Their algorithm efficiently searches for and verifies a Lyapunov function, and is scalable to more complex robots like quadrotors.
We need to prepare for ‘addictive intelligence’!
AI companions like Replika offer users a chance to connect with holographic copies of deceased loved ones. But experts warn that these interactions can be addictive, thanks to AI's ability to cater to our desires and mirror our emotions. As AI becomes more advanced, it's essential to investigate the incentives driving its development and create policies to address potential harms.
Read more
@Ai_Events
MIT Technology Review
The allure of AI companions is hard to resist. Here’s how innovation in regulation can help protect people.