In a recent interview, Jeff Dean, a leading researcher at Google, shared views relevant to both AI development and environmental concerns. First, he criticized the idea that merely scaling up AI models and datasets will lead to major breakthroughs in AI; instead, he suggested that a qualitative leap is needed, the nature of which is still unknown.
Regarding environmental concerns, Jeff Dean addressed criticism of Google's growing carbon footprint due to AI research. He clarified that:
- The transition to renewable energy is inherently incremental: new large-scale generating capacity must be built and integrated, so even as energy consumption grows steadily, the carbon footprint may rise at first but is expected to drop sharply once the new capacity comes online.
- AI's energy consumption is a small fraction of the total energy usage by tech giants, which is dominated by traditional services for billions of users and cloud services for businesses globally. The challenge, therefore, is not to make AI specifically "clean" in terms of energy but to improve the energy efficiency of data centers at large.
Source
@Ai_Events
Elon Musk tweeted that xAI has started training on its newly built supercluster in Memphis, Tennessee.
This data center is equipped with 100,000 H100 GPUs, a substantial number compared with Meta's recently launched clusters of 24,576 GPUs each; GPT-4 was rumored to have been trained on about 25,000 cards.
Source
@Ai_Events
Launch event for the open-source release of the DadmaTools AI toolkit
At this event the DadmaTools NLP toolkit will be officially released as open source, and plans for community-driven development, with participation from academia and the private sector and government backing, will be presented.
⏳ Time:
Monday, 15 Mordad, 10:00 to 12:00
📍 Location:
Innovation and Prosperity Fund, amphitheater hall
📎 Registration link:
https://evand.com/events/dadmatools
@Ai_Events
Meta AI released a new version of the Llama model today
The new version, 3.1, introduces a 405B-parameter model along with upgrades to the previous 70B and 8B models. According to the release notes, the new model's capabilities are comparable to those of GPT-4, GPT-4o, and Claude 3.5 Sonnet.
Release notes: https://ai.meta.com/blog/meta-llama-3-1/
Model on Hugging Face: https://huggingface.co/meta-llama/Meta-Llama-3.1-405B
@Ai_Events
Elon Musk says xAI's new supercomputer in Memphis was installed in just 19 days and will be used to train Grok 3, which is expected by December and which he claims will be the most powerful AI in the world.
Source
@Ai_Events
Huge announcement from Meta. Welcome Llama 3.1!
This is all you need to know about it:
The new models:
- The Meta Llama 3.1 family of multilingual large language models (LLMs) is a collection of pre-trained and instruction-tuned generative models in 8B, 70B, and 405B sizes (text in/text out).
- All models support long context length (128k) and are optimized for inference with support for grouped query attention (GQA).
- The models are optimized for multilingual dialogue use cases and outperform many available open-source chat models on common industry benchmarks.
- Llama 3.1 is an auto-regressive language model with an optimized transformer architecture, using SFT and RLHF for alignment. Its core LLM architecture is the same dense structure as Llama 3 for text input and output.
- Tool use: the Llama 3.1 Instruct models (text) are fine-tuned for tool use, enabling them to generate tool calls for search, image generation, code execution, and mathematical reasoning, and they also support zero-shot tool use. A minimal loading sketch follows below.
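For those who want to try the release, here is a minimal loading sketch using the Hugging Face transformers chat pipeline. The 8B-Instruct repo name, the chat-style pipeline call, and the transformers version requirement are assumptions beyond the 405B link above; the gated meta-llama repos also require accepting the license.

```python
# Minimal sketch (not official example code): run a Llama 3.1 Instruct
# model through the transformers text-generation pipeline. The smaller
# 8B-Instruct checkpoint is used instead of the 405B model; the repo
# name and required transformers version (>= 4.43) are assumptions.
import torch
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Explain grouped query attention in two sentences."},
]
out = chat(messages, max_new_tokens=128)
# The pipeline returns the whole conversation; the reply is the last turn.
print(out[0]["generated_text"][-1]["content"])
```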
@Ai_Events
TalKnowlogy
The first annual event for conversations about technology
Featuring speakers from Harvard, Microsoft, Tesla, and more
Twelve engaging, practical talks on cutting-edge technologies, delivered by experts from the world's top companies and by professors and students of leading universities worldwide
Participation in this international event is open and free for all interested enthusiasts and scholars
Click here to register
@ElectricalEng_Association
@Ai_Events
Google’s new weather prediction system combines AI with traditional physics
Weather and climate experts are divided on whether AI or more traditional methods are most effective. In this new model, Google’s researchers bet on both.
Read more
@Ai_Events
Google DeepMind’s new AI systems can now solve complex math problems
AlphaProof and AlphaGeometry 2 are steps toward building systems that can reason, which could unlock exciting new capabilities.
Read more
@Ai_Events
ChatGPT Advanced Voice Mode 😱😱😱
Counting as fast as it can to 10, then to 50 (this blew my mind - it stopped to catch its breath like a human would)
Source
@Ai_Events
How machines that can solve complex math problems might usher in more powerful AI
Google DeepMind’s AlphaProof and AlphaGeometry 2 are milestones for AI reasoning.
Read more
@Ai_Events
Intel has worst day on Wall Street in 50 years, falls to lowest price in over a decade
Intel shares had their biggest drop since 1974 on Friday after the chipmaker reported a big miss on earnings for the June quarter and said it would lay off more than 15% of its employees.
The stock is trading at its lowest since 2013.
Asian names including Samsung and TSMC closed lower, with European chip firms such as ASML also dropping.
Read more
@Ai_Events
CNBC
Intel to cut 15% of workforce, reports quarterly guidance miss
Intel will say goodbye to 15,000 employees, cut capital expenditures and forgo a fourth-quarter dividend following weak results and quarterly guidance.
AI companies promised to self-regulate one year ago. What’s changed?
The White House’s voluntary AI commitments have brought better red-teaming practices and watermarks, but no meaningful transparency or accountability.
Read more
@Ai_Events
When allocating scarce resources with AI, randomization can improve fairness
- The use of machine-learning models to allocate scarce resources or opportunities can be improved by introducing randomization into the decision-making process.
- Researchers from MIT and Northeastern University argue that traditional fairness methods, such as adjusting features or calibrating scores, are insufficient to address structural injustices and inherent uncertainties.
- Introducing randomization can prevent one deserving person or group from always being denied a scarce resource, and can be especially beneficial in situations involving uncertainty or repeated negative decisions (a toy sketch of the idea follows below).
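A toy sketch of the idea, assuming a score-weighted lottery as the form of randomization (an illustration of the general principle, not the researchers' exact method):

```python
# Toy sketch of structured randomization in allocation (an illustration
# of the general idea, not the MIT/Northeastern method): winners are
# sampled with probability proportional to their scores rather than
# picked deterministically top-k.
import random

def weighted_lottery(scores: dict[str, float], k: int, seed: int | None = None) -> list[str]:
    """Select k winners without replacement, weighted by score."""
    rng = random.Random(seed)
    pool = dict(scores)
    winners: list[str] = []
    for _ in range(min(k, len(pool))):
        names = list(pool)
        pick = rng.choices(names, weights=[pool[n] for n in names], k=1)[0]
        winners.append(pick)
        del pool[pick]  # sample without replacement
    return winners

applicants = {"ana": 0.9, "bo": 0.7, "cy": 0.4, "di": 0.2}
print(weighted_lottery(applicants, k=2, seed=0))
```

Unlike deterministic top-k selection, every candidate with a nonzero score has some chance of winning, so no one is shut out on every round.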
Read more
@Ai_Events
MIT News
Study: When allocating scarce resources with AI, randomization can improve fairness
MIT researchers argue that, in some situations where machine-learning models are used to allocate scarce resources or opportunities, randomizing decisions in a structured way may lead to fairer outcomes.
MIT researchers advance automated interpretability in AI models
Understanding AI Systems
Artificial intelligence models are becoming increasingly prevalent, and understanding how they work is crucial for auditing and improving their performance. MIT researchers developed MAIA, a system that automates the interpretation of artificial vision models. MAIA can label individual components, identify biases, and even design experiments to test hypotheses.
MAIA in Action
MAIA demonstrates its ability to tackle three key tasks: labeling individual components, cleaning up image classifiers, and hunting for hidden biases. For example, MAIA was asked to describe the concepts that a particular neuron inside a vision model is responsible for detecting; it uses tools to design experiments and test hypotheses, providing a comprehensive answer.
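As a rough illustration of the kind of single-neuron probing MAIA automates (a hand-written sketch, not MAIA's code; the model, layer, and channel are arbitrary choices):

```python
# Rough sketch of a neuron probe (illustration only, not MAIA itself):
# register a forward hook on one channel of a pretrained ResNet and
# record its mean activation per image, so probe images can be ranked
# by how strongly they excite that unit.
import torch
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.DEFAULT).eval()
activations = {}

def hook(module, inputs, output):
    # Channel 42 of layer4's output: one scalar per image in the batch.
    activations["unit42"] = output[:, 42].mean(dim=(1, 2)).detach()

model.layer4.register_forward_hook(hook)

images = torch.randn(4, 3, 224, 224)  # stand-in for real probe images
with torch.no_grad():
    model(images)
print(activations["unit42"])
```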
Limitations and Future Directions
While MAIA is a significant step forward in interpretability, it has limitations. For instance, its performance is limited by the quality of the tools it uses and can sometimes display confirmation bias. Future directions include scaling up the method to apply it to human perception and developing tools to overcome its limitations.
As artificial intelligence models become increasingly prevalent, and are integrated into diverse sectors like health care, finance, education, transportation, and entertainment, understanding how they work under the hood is critical. Imagine if we could directly investigate the human brain by manipulating each of its individual neurons to examine their roles in perceiving a particular object.
...
Read more
@Ai_Events
MIT News
MIT researchers advance automated interpretability in AI models
MAIA is a multimodal agent for neural network interpretability tasks developed at MIT CSAIL. It uses a vision-language model as a backbone and equips it with tools for experimenting on other AI systems.