OpenAI Warns of Risks with Humanlike ChatGPT Voice Interface
OpenAI has released a safety analysis highlighting potential risks of its humanlike voice interface for ChatGPT. The company warns that the emotional attachment users may form with the chatbot could lead them to place undue trust in it and could affect their relationships with other people.
The safety analysis, released as the 'system card' for GPT-4o, acknowledges the model's potential risks, including amplifying societal biases, spreading disinformation, and aiding the development of chemical or biological weapons.
Experts commend OpenAI for its transparency but suggest that the company could provide more information on the model's training data and ownership.
The system card also highlights the risks of anthropomorphism, where users perceive the AI in human terms, which could lead to problems such as emotional reliance and negative effects on real-world relationships.
OpenAI is studying the emotional connections between users and the chatbot, including monitoring beta testers, to better understand and mitigate potential risks.
@Ai_Events
.
Bill Gross's ProRata Aims to Make AI Pay for Content
Bill Gross, founder of ProRata, is taking on the AI industry's use of unlicensed data by proposing a new business model: 'AI pay-per-use'.
Gross argues that AI companies are stealing the world's knowledge and wants to make the arrangement fair by striking revenue-sharing deals with publishers and individuals.
ProRata's algorithms aim to identify which sources contributed to a piece of AI-generated content and pay the copyright holders accordingly. The company has already signed big-name partners including Universal Music Group and The Atlantic.
Some argue that AI companies need vast troves of data to create cutting-edge tools; Gross calls that claim 'bullshit' and says ProRata's licensing model, not litigation, is the solution.
The company has filed patent applications and plans to launch its own subscription search engine in October, using only licensed data.
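ProRata has not published its attribution method, but the pay-per-use idea described above is easy to illustrate. The sketch below assumes each AI-generated answer comes with per-source attribution weights and splits a revenue pool proportionally among copyright holders; the function name, weights, and dollar figures are all hypothetical.

```python
# Hypothetical illustration of attribution-based revenue sharing, not
# ProRata's actual algorithm: split a revenue pool among sources in
# proportion to how much each contributed to a generated answer.

def split_revenue(attribution_weights: dict[str, float], revenue: float) -> dict[str, float]:
    """Return each source's share of `revenue`, proportional to its attribution weight."""
    total = sum(attribution_weights.values())
    if total <= 0:
        return {source: 0.0 for source in attribution_weights}
    return {source: revenue * weight / total
            for source, weight in attribution_weights.items()}

# Example: an answer judged to draw 60% on one publisher and 40% on another
# splits $0.10 of query revenue accordingly.
print(split_revenue({"publisher_a": 0.6, "publisher_b": 0.4}, revenue=0.10))
# publisher_a gets ~$0.06, publisher_b ~$0.04
```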
@Ai_Events
.
LLMs Hold Promise for Efficient Anomaly Detection in Time-Series Data
Researchers from MIT have developed a framework called SigLLM that utilizes large language models (LLMs) for efficient anomaly detection in time-series data.
The framework can detect anomalies in wind turbine data, satellite data, or other complex systems, which can lead to cost savings and improved maintenance.
Unlike traditional deep-learning models, LLMs can be deployed off the shelf for this task, without any task-specific training or fine-tuning.
While the LLMs did not outperform state-of-the-art deep-learning models, they showed promise and could be applied to anomaly detection in other complex tasks.
Future work will focus on improving the performance of LLMs and increasing the speed of the anomaly detection process.
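One of the ideas the MIT work explores is a forecast-and-compare setup; the sketch below shows that general idea under stated assumptions. A window of readings is serialized as text, a language model (stubbed out here with a simple average) forecasts the next value, and points with large forecast errors are flagged. SigLLM's actual prompting, parsing, and thresholding may differ.

```python
# Minimal sketch of LLM-based, forecast-and-compare anomaly detection.
# `llm_forecast` is a stand-in for a real LLM call; everything else is
# ordinary residual thresholding.

def serialize(window: list[float]) -> str:
    """Turn a numeric window into the comma-separated text an LLM would see."""
    return ", ".join(f"{x:.2f}" for x in window)

def llm_forecast(prompt: str) -> float:
    """Placeholder for a language-model forecast; here it just averages the window."""
    values = [float(v) for v in prompt.split(", ")]
    return sum(values) / len(values)

def detect_anomalies(series: list[float], window: int = 5, threshold: float = 3.0) -> list[int]:
    """Return indices whose value deviates from the forecast by more than `threshold`."""
    flagged = []
    for i in range(window, len(series)):
        predicted = llm_forecast(serialize(series[i - window:i]))
        if abs(series[i] - predicted) > threshold:
            flagged.append(i)
    return flagged

readings = [1.0, 1.1, 0.9, 1.0, 1.2, 1.1, 9.5, 1.0, 1.1, 0.9]
print(detect_anomalies(readings))  # [6]: the 9.5 spike
```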
@Ai_Events
.
Google Introduces Gemini Live, an AI-Powered Voice Chatbot
Google has launched Gemini Live, an AI-powered voice chatbot designed to make interactions with its assistant feel more natural. The launch is Google's response to OpenAI's GPT-4o, aiming to make voice conversations feel more like human-to-human interaction.
Gemini Live is rolling out in English to Gemini Advanced subscribers, and users can access it by tapping on the Live button at the bottom right of the Gemini app. The feature is expected to come to the iOS app and more languages in the coming weeks.
The chatbot has been rebuilt using generative AI and offers a more fluid and natural interface, allowing users to talk to it naturally without having to change their speech style. Users can choose from 10 different voices, and the AI assistant can be accessed even when the phone is locked and the screen is off.
Gemini Live also allows users to interrupt the conversation without disrupting the entire experience, and the goal is to connect it with other apps via extensions. These extensions will launch in the coming weeks, allowing users to perform tasks such as pulling up party invitations in Gmail or adding ingredients to a shopping list in Google Keep.
@Ai_Events
.
LLMs Develop Deeper Understanding of Language
Ask a large language model (LLM) like GPT-4 to smell a rain-soaked campsite, and it'll politely decline. However, when describing the scent, it'll wax poetic about 'an air thick with anticipation' and 'a scent that is both fresh and earthy,' despite having neither prior experience with rain nor a nose to help it make such observations.
LLMs have long been considered to lack understanding of language, as they simply mimic text present in their training data. However, a recent study by researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) suggests that LLMs may develop their own understanding of reality as a way to improve their generative abilities.
The team trained an LLM on a set of small Karel puzzles, which involve coming up with instructions to control a robot in a simulated environment. They then used a machine-learning technique called 'probing' to look inside the model's internal representations, its 'thought process,' as it generated new solutions.
After training on over 1 million random puzzles, the model spontaneously developed its own conception of the underlying simulation, despite never being exposed to this reality during training. This finding calls into question our intuitions about what types of information are necessary for learning linguistic meaning.
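'Probing' here means training a small classifier on the model's internal activations to test whether some property of the simulated world can be read out of them. The sketch below shows the general recipe on synthetic data, with a planted signal standing in for real hidden states; the CSAIL study's probes and state representations are more elaborate than this.

```python
# Toy probing example: if a linear classifier trained on hidden activations
# can predict a world property well above chance, that property is encoded
# in the representations. Activations here are synthetic, with the signal
# planted in one dimension.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, hidden_dim = 1000, 64

facing_north = rng.integers(0, 2, size=n_samples)    # hypothetical simulator state
activations = rng.normal(size=(n_samples, hidden_dim))
activations[:, 0] += 2.0 * facing_north               # the signal a probe should find

probe = LogisticRegression(max_iter=1000).fit(activations[:800], facing_north[:800])
print(f"probe accuracy: {probe.score(activations[800:], facing_north[800:]):.2f}")
# Well above the 0.5 chance level, so the state is linearly decodable.
```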
@Ai_Events
.
Innovative Tech Aims to Predict Heart Health with Continuous Monitoring
Roeland Decorte, a young Belgian, was inspired to develop diagnostic technology after his father's life-threatening heart condition was misdiagnosed. Decorte grew up in a nursing home, where he learned to spot early signs of mental decline in residents.
Decorte founded a company to crack the 'secret rhythm of the heart' using AI and machine learning. He aimed to develop a technology that could continuously monitor the body and detect subtle changes in vital signs, enabling quicker diagnosis and treatment.
Initial attempts to build sensors into clothes and an exoskeleton to measure vitals were unsuccessful due to noise and interference from external factors. Decorte learned a valuable lesson about the importance of precision and accuracy in health monitoring solutions.
Decorte's technology has the potential to reshape healthcare, particularly amid the current AI boom, in which data is a major bottleneck. His solution could bridge the gap between data collection and diagnosis, enabling doctors to treat patients more effectively.
@Ai_Events
.
The artificial intelligence-powered search engine is one of the fastest-growing generative AI apps since ChatGPT, despite controversy over its data-gathering techniques.
Source
@Ai_Events
MIT Researchers Propose AI-Proof Personhood Credentials
Artificial intelligence agents are becoming increasingly advanced, making it difficult to distinguish between AI-powered users and real humans online. To address this issue, researchers from MIT, OpenAI, Microsoft, and other institutions propose the use of personhood credentials, a verification technique that enables someone to prove they are a real human online while preserving their privacy.
Personhood credentials would allow users to prove they are human without revealing any sensitive information about their identity. To obtain one, a person would need to show up somewhere in person or already have a relationship with the issuer, such as a government that has their tax ID number.
The proposal aims to combat the risks associated with advanced AI capabilities, including the ability to create fake content, algorithmically amplify content, and spread misinformation. If implemented, personhood credentials could help filter out certain content and moderate online interactions.
However, there are risks associated with personhood credentials, including the concentration of power and potential stifling of free expression. To mitigate these risks, the proposal suggests implementing personhood credentials in a way that ensures a variety of issuers and an open protocol for maximum freedom of expression.
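As a rough sketch of the flow described above (and only a sketch; the actual proposal relies on more advanced cryptography, such as unlinkable credentials), an issuer that has verified a person signs a random pseudonym, and a website later checks the signature against the issuer's public key without learning anything about who the holder is. The example uses the third-party `cryptography` package.

```python
# Toy personhood-credential flow: an issuer signs a pseudonymous token after an
# in-person check; a website verifies the issuer's signature and learns
# nothing about the holder's identity. Illustrative only.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Issuer side: after verifying the person is human, sign a random pseudonym.
issuer_key = Ed25519PrivateKey.generate()
pseudonym = os.urandom(16)             # carries no identity information
credential = issuer_key.sign(pseudonym)

# Website side: check the credential against the issuer's public key only.
issuer_public_key = issuer_key.public_key()
try:
    issuer_public_key.verify(credential, pseudonym)
    print("credential accepted: holder is a verified person")
except InvalidSignature:
    print("credential rejected")
```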
@Ai_Events
.
Iranian Hackers Target Presidential Campaign, Microsoft Reports
For the third presidential election in a row, foreign hacking of the campaigns has begun in earnest. This time, it's the Iranians, not the Russians, making the first significant move. Microsoft released a report stating that a hacking group run by an Iranian intelligence unit had breached the account of a former senior adviser to a presidential campaign.
From that account, the group sent fake email messages, known as 'spear phishing,' to a high-ranking official of a presidential campaign in an effort to break into the campaign's own accounts and databases. While it is unclear what, if anything, the Iranian group was able to achieve, the events of the past few days may well portend a more intense period of foreign interference in the race.
The Iranians have a clear motive to see President Trump defeated, as he withdrew from the 2015 nuclear deal, reimposed economic sanctions on Iran, and ordered the killing of Maj. Gen. Qassim Suleimani, the commander of the Quds Force. The Iranian Revolutionary Guard Corps appears determined to avenge Suleimani's death and has been accused of trying to hire a hit man to assassinate political figures in the US, including Mr. Trump.
The hack and the assassination attempt give the former president an obvious foil, and he is using it to make the case that the Iranians would prefer a continuation of the Biden-Harris administration. Microsoft stopped short of saying that the hacking effort it detected was focused on Mr. Trump's campaign, though the campaign itself said that was the case.
The effort is similar in technique to what Iran attempted when it sought to interfere in the 2020 presidential campaign, but this time it appears to have been more sophisticated, suggesting the hackers learned something from what the Russians accomplished in past campaigns.
Source
@Ai_Events
AI Risk Database Launched to Monitor Post-Deployment Woes
A new database has been created to track the risks associated with artificial intelligence (AI), highlighting the need for ongoing monitoring and mitigation after models are deployed.
The database, compiled by MIT FutureTech, lists 22 potential risks, many of which cannot be checked for ahead of time, according to director Neil Thompson.
Previous attempts to catalog AI risks were limited in scope, but this new database aims to provide a comprehensive and neutral view of the threats, sidestepping the challenge of ranking risks by severity.
Despite its thoroughness, the database may have limited usefulness if it only serves as a list of risks without providing solutions for mitigation.
Researchers intend for the database to be a living document, seeking feedback and further investigation into under-researched areas, with potential solutions to be developed in the future.
@Ai_Events
.
People Are Forming Relationships with AI Systems
Two years after AI was expected to boost productivity, many people are still waiting to see those gains. What's unexpected is that people have started forming relationships with AI systems, treating them as friends, lovers, and mentors.
Researchers from the MIT Media Lab and Harvard Law School argue that we need to prepare for 'addictive intelligence' and regulate AI chatbots to reduce the risks. Chatbots with emotive voices are especially likely to draw users into deep connections.
The most popular use of AI language models is creative composition; the second most popular is sexual role-playing. People also use them for brainstorming, planning, and asking for general information.
AI chatbots can be useful for generating ideas and assisting with creative tasks, but their limitations are becoming increasingly apparent, and investors are starting to lose confidence in the technology.
The hype surrounding AI has set unrealistic expectations, leading to disappointment and disillusionment when the technology fails to deliver on its promises. It may take years for AI to reach its full potential.
@Ai_Events
.
We are actively seeking talented individuals with expertise in scientific data visualization and computing. I'm reaching out to share several new positions within my group at the UK Atomic Energy Authority; details of the available roles are below.
- Advanced Visualization Scientist: https://careers.ukaea.uk/job/advanced-visualisation-scientist/
- Lead Advanced Visualization Scientist: https://careers.ukaea.uk/job/lead-advanced-visualisation-scientist/
Experience Requirements:
- Scientific or 3D visualization, high-performance computing
- Common visualization frameworks such as VTK, ParaView (Kitware), Omniverse, etc.
- Computer graphics, including knowledge of rendering techniques, shading languages, and graphics APIs (e.g., OpenGL, DirectX, Vulkan)
- Python, C++, CUDA (and other GPU technologies)
- Scientific visualization-relevant data formats and proficient in data conversion (e.g., VTK, VDB, USD, HDF5)
- Open-source projects or published research in relevant fields
Join our team at UKAEA and contribute to the future of fusion energy.
@Ai_Events
Trump Shares Fake AI-Generated Images of Taylor Swift Fans Supporting Him
Former President Donald Trump has been caught spreading fake AI-generated images claiming Taylor Swift fans are supporting his campaign. He shared four screenshots on Truth Social, purportedly showing young women wearing 'Swifties for Trump' T-shirts.
However, an analysis by WIRED found that several of the images show 'substantial evidence of manipulation', with some potentially created by an anonymous pro-Trump account with over 300,000 followers.
The so-called 'Swifties for Trump' campaign appears to be a fabrication, with no real evidence of an active initiative. Meanwhile, there is a Swifties4Kamala group, but its cofounder emphasizes that they do not represent all Swifties.
Trump has a history of sharing AI-generated images, including one from an anonymous pro-Trump account claiming the Harris campaign was artificially inflating crowd sizes at her rallies.
Disinformation experts have warned about the threat posed to election integrity by generative AI tools, and this example highlights the issue.
@Ai_Events
.
Condé Nast Partners with OpenAI to Use Content in ChatGPT and SearchGPT
Condé Nast and OpenAI have struck a multi-year deal allowing the AI company to use content from the publisher's roster of properties, including WIRED, in both ChatGPT and SearchGPT.
The deal aims to meet audiences where they are and ensure proper attribution and compensation for the use of intellectual property. Condé Nast CEO Roger Lynch highlighted the ongoing turmoil within the publishing industry and the need for revenue from deals like this to continue investing in journalism and creative endeavors.
Specific terms of the partnership have not been disclosed, and OpenAI declined to comment. The deal has raised concerns among NewsGuild of New York members, who are seeking transparency on how the technology will be used and its potential impact on their work.
Condé Nast joins a growing list of media companies partnering with generative AI firms, including The Atlantic, Axel Springer, and TIME. As major AI companies increasingly gather training data through scraping, publishers face a choice between allowing it, with uncertain consequences for their online visibility, and blocking it, at the risk of their content becoming less discoverable.
This deal has drawn criticism from some, with The Information's CEO Jessica Lessin comparing it to 'settling without litigation' and arguing that publishers are trading their credibility for cash. Condé Nast employees have also expressed concerns, with some questioning the ethics of training AI tools that could spread misinformation.
@Ai_Events
.
AI Tea Talks Singapore
A Neural Network Approach for Human Visual Learning
Ru-Yuan Zhang
Associate Professor at Shanghai Jiao Tong University
Thu, Aug 22nd, 8 PM Singapore/Beijing time
Thu, Aug 22nd, 3:30 PM Tehran time
Thu, Aug 22nd, 1 PM London time
Thu, Aug 22nd, 8 AM New York time
Zoom Link
More information: https://aiteatalksingapore.github.io/
@Ai_Events
Boards Must Improve AI Governance for Responsible Use
According to Carine Smith Ihenacho of Norway's sovereign wealth fund, boards need to be proficient with AI and take control of how it is applied in their businesses to mitigate risks. The fund has recommended responsible AI practices to its portfolio companies, emphasizing the importance of robust governance structures for managing AI-related risks.
The fund has shared its perspective on AI with the boards of its 60 largest portfolio companies, focusing on AI use in the healthcare sector because of its substantial impact on consumers. The fund's push for AI governance aligns with rising global concerns about the ethical implications and potential dangers of these technologies.
As companies seek to harness the power of AI while navigating its complexities, guidance from influential investors like Norges Bank Investment Management may serve as a blueprint for responsible AI implementation and governance in the corporate world.
The fund's emphasis on AI governance is particularly relevant, given that nine of the ten largest positions in its equity holdings are tech companies. This underscores the significant role that technology and AI play in the world today.
@Ai_Events
.
AI Capabilities Growing Faster Than Hardware: Can Decentralization Close the Gap?
AI capabilities have exploded over the past two years, with generative AI tools like ChatGPT, DALL-E, and Midjourney becoming part of everyday use.
A recent McKinsey survey found that the share of companies that have adopted generative AI in at least one business function roughly doubled to 65% within a year.
However, training and running AI programs is a resource-intensive endeavor, and big tech seems to have an upper hand, creating the risk of AI centralization.
Projections from the World Economic Forum and Epoch AI point to accelerating demand for AI compute, with required computational power growing at an annual rate of 26-36%.
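To put those growth rates in perspective, here is a quick doubling-time calculation (an illustration of the arithmetic, not a figure from the projections themselves):

```python
# At a compound annual growth rate r, demand doubles every ln(2) / ln(1 + r) years.
from math import log

for rate in (0.26, 0.36):
    print(f"{rate:.0%} annual growth: doubles in {log(2) / log(1 + rate):.1f} years")
# 26% annual growth: doubles in 3.0 years
# 36% annual growth: doubles in 2.3 years
```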
Microsoft, Alphabet's Google, and Nvidia are investing heavily in AI research and development, leaving smaller companies struggling to access computing power.
Decentralized computing infrastructures like Qubic, a Layer 1 blockchain, offer an alternative, using miners to provide computational power.
This decentralized approach could reduce costs and increase innovation, making it easier for more stakeholders to develop AI solutions.
The challenge of accessing computational power is a hindrance to AI innovation, and decentralization could be the solution to close the gap.
@Ai_Events
.
Primate Labs Launches Geekbench AI Benchmarking Tool
Primate Labs has launched Geekbench AI, a benchmarking tool designed for machine learning and AI-centric workloads. The tool provides a standardized method for measuring and comparing AI capabilities across different platforms and architectures.
Geekbench AI offers three overall scores, reflecting the complexity and heterogeneity of AI workloads, and includes accuracy measurements for each test. The tool supports a wide range of AI frameworks, including OpenVINO, TensorFlow Lite, and more.
The benchmark is integrated with the Geekbench Browser, allowing for easy cross-platform comparisons and result sharing. Primate Labs anticipates regular updates to Geekbench AI to keep pace with market changes and emerging AI features.
Major tech companies like Samsung and Nvidia have already begun utilizing the benchmark, and Primate Labs believes that Geekbench AI has reached a level of reliability suitable for integration into professional workflows.
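Geekbench AI's exact aggregation is not spelled out here; as an illustration of how benchmark suites commonly roll per-workload results into one number, the sketch below uses a geometric mean, which keeps one fast workload from masking a slow one. The workload names and scores are invented.

```python
# Illustrative composite score: geometric mean of per-workload results.
from math import prod

def geometric_mean(scores: list[float]) -> float:
    return prod(scores) ** (1 / len(scores))

quantized_results = {"image_classification": 3400, "object_detection": 2900, "style_transfer": 3100}
print(round(geometric_mean(list(quantized_results.values()))))  # hypothetical 'quantized' composite
```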
@Ai_Events
.