AI Risk Database Launched to Monitor Post-Deployment Woes
A new database has been created to track the risks associated with artificial intelligence (AI), highlighting the need for ongoing monitoring and mitigation after models are deployed.
The database, compiled by MIT FutureTech, lists 22 potential risks, many of which cannot be checked for before deployment, according to lab director Neil Thompson.
Previous attempts to catalog AI risks were limited in scope, but this new database aims to provide a comprehensive and neutral view of the threats, sidestepping the challenge of ranking risks by severity.
Despite its thoroughness, the database may have limited usefulness if it only serves as a list of risks without providing solutions for mitigation.
Researchers intend for the database to be a living document, seeking feedback and further investigation into under-researched areas, with potential solutions to be developed in the future.
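As a purely illustrative sketch (not MIT FutureTech's actual schema), a living risk registry of this kind can be modelled as tagged records that are easy to filter for post-deployment, under-researched risks:

```python
from dataclasses import dataclass, field

@dataclass
class AIRisk:
    name: str
    domain: str                      # e.g. "misinformation", "privacy"
    detectable_pre_deployment: bool  # can it be checked before release?
    research_depth: int = 0          # count of studies examining this risk
    mitigations: list = field(default_factory=list)

# Hypothetical entries for illustration only.
registry = [
    AIRisk("model drift after release", "reliability", False, research_depth=2),
    AIRisk("training-data leakage", "privacy", True, research_depth=14),
]

# Surface risks that only show up post-deployment and are under-researched.
needs_attention = [r for r in registry
                   if not r.detectable_pre_deployment and r.research_depth < 5]
```

Filtering like this is what makes a "living document" actionable: new studies bump `research_depth`, and the attention list shrinks as gaps close.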
@Ai_Events
.
People Are Forming Relationships with AI Systems
Two years after AI was expected to boost productivity, many people are still waiting to see those gains. What's unexpected is that people have started forming relationships with AI systems, treating them as friends, lovers, and mentors.
Researchers from the MIT Media Lab and Harvard Law School argue that we need to prepare for 'addictive intelligence' and regulate AI chatbots to mitigate the risks. Chatbots with emotive voices are likely to form deep connections with users.
The most popular use of AI language models is creative composition; the second most popular is sexual role-playing. People also use them for brainstorming, planning, and asking for general information.
AI chatbots can be useful for generating ideas and assisting with creative tasks, but their limitations are becoming increasingly apparent, and investors are starting to lose confidence in the technology.
The hype surrounding AI has set unrealistic expectations, leading to disappointment and disillusionment when the technology fails to deliver on its promises. It may take years for AI to reach its full potential.
@Ai_Events
.
We are actively seeking talented individuals with expertise in scientific data visualization and computing. I'm reaching out to share several new positions within my group at the UK Atomic Energy Authority (UKAEA). Below, you will find the details of the available roles.
- Advanced Visualization Scientist: https://careers.ukaea.uk/job/advanced-visualisation-scientist/
- Lead Advanced Visualization Scientist: https://careers.ukaea.uk/job/lead-advanced-visualisation-scientist/
Experience Requirements:
- Scientific or 3D visualization and high-performance computing
- Common visualization frameworks such as VTK, Kitware ParaView, Omniverse, etc.
- Computer graphics, including knowledge of rendering techniques, shading languages, and graphics APIs (e.g., OpenGL, DirectX, Vulkan)
- Python, C++, and CUDA (and other GPU technologies)
- Data formats relevant to scientific visualization and proficiency in data conversion (e.g., VTK, VDB, USD, HDF5)
- Open-source projects or published research in relevant fields
Join our team at UKAEA and contribute to the future of fusion energy.
@Ai_Events
Trump Shares Fake AI-Generated Images of Taylor Swift Fans Supporting Him
Former President Donald Trump has been caught spreading fake AI-generated images claiming Taylor Swift fans are supporting his campaign. He shared four screenshots on Truth Social, purportedly showing young women wearing 'Swifties for Trump' T-shirts.
However, an analysis by WIRED found that several of the images show 'substantial evidence of manipulation', with some potentially created by an anonymous pro-Trump account with over 300,000 followers.
The so-called 'Swifties for Trump' campaign appears to be a fabrication, with no real evidence of an active initiative. Meanwhile, there is a Swifties4Kamala group, but its cofounder emphasizes that they do not represent all Swifties.
Trump has a history of sharing AI-generated images, including one from an anonymous pro-Trump account claiming the Harris campaign was artificially inflating crowd sizes at her rallies.
Disinformation experts have warned about the threat posed to election integrity by generative AI tools, and this example highlights the issue.
@Ai_Events
.
Condé Nast Partners with OpenAI to Use Content in ChatGPT and SearchGPT
Condé Nast and OpenAI have struck a multi-year deal, allowing the AI giant to use content from the media giant's roster of properties, including WIRED, on both ChatGPT and SearchGPT.
The deal aims to meet audiences where they are and ensure proper attribution and compensation for the use of intellectual property. Condé Nast CEO Roger Lynch highlighted the ongoing turmoil within the publishing industry and the need for revenue from deals like this to continue investing in journalism and creative endeavors.
Specific terms of the partnership have not been disclosed, but OpenAI declined to comment. The deal has raised concerns among NewsGuild of New York members, who are seeking transparency on how the technology will be used and its potential impact on their work.
Condé Nast joins a growing list of media companies partnering with generative AI companies, including The Atlantic, Axel Springer, and TIME. As major AI companies increasingly gather training data through scraping, publishers face a choice: allowing it and risking the impact on their online visibility or not and risking the loss of their content's discoverability.
This deal has drawn criticism from some, with The Information's CEO Jessica Lessin comparing it to 'settling without litigation' and arguing that publishers are trading their credibility for cash. Condé Nast employees have also expressed concerns, with some questioning the ethics of training AI tools that could spread misinformation.
@Ai_Events
.
AI tea talks Singapore
A Neural Network Approach for Human Visual Learning
Ru-Yuan Zhang
Associate Professor at Shanghai Jiao Tong University
Thu, Aug 22, 8 PM Singapore/Beijing time
Thu, Aug 22, 3:30 PM Tehran time
Thu, Aug 22, 1 PM London time
Thu, Aug 22, 8 AM New York time
Zoom Link
More information: https://aiteatalksingapore.github.io/
@Ai_Events
Boards Must Improve AI Governance for Responsible Use
According to Carine Smith Ihenacho of Norges Bank Investment Management, which runs Norway's sovereign wealth fund, boards need to be proficient with AI and take control of its application in businesses to mitigate risks. The fund has recommended responsible AI practices to its invested companies, emphasizing the importance of robust governance structures to manage AI-related risks.
The fund has shared its perspective on AI with the boards of its 60 largest portfolio companies, focusing on AI use in the healthcare sector due to its substantial impact on consumers. The fund's adoption of AI governance aligns with rising global concerns about the ethical implications and potential dangers of these technologies.
As companies seek to harness the power of AI while navigating its complexities, the guidance provided by influential investors like Norges Bank Investment Fund may serve as a blueprint for responsible AI implementation and governance in the corporate world.
The fund's emphasis on AI governance is particularly relevant, given that nine of the ten largest positions in its equity holdings are tech companies. This underscores the significant role that technology and AI play in the world today.
@Ai_Events
.
AI Capabilities Growing Faster Than Hardware: Can Decentralization Close the Gap?
AI capabilities have exploded over the past two years, with generative AI systems like ChatGPT, DALL-E, and Midjourney becoming everyday tools.
A recent McKinsey survey revealed that the share of companies that have adopted generative AI in at least one business function doubled to 65% within a year.
However, training and running AI programs is a resource-intensive endeavor, and big tech seems to have an upper hand, creating the risk of AI centralization.
Projections from the World Economic Forum and Epoch AI show accelerating demand for AI compute, with required computational power growing at an annual rate of 26-36%.
Microsoft, Alphabet (Google's parent), and Nvidia are investing heavily in AI research and development, leaving smaller companies struggling to access computing power.
Decentralized computing infrastructures like Qubic, a Layer 1 blockchain, offer an alternative, using miners to provide computational power.
This decentralized approach could reduce costs and increase innovation, making it easier for more stakeholders to develop AI solutions.
The challenge of accessing computational power is a hindrance to AI innovation, and decentralization could be the solution to close the gap.
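At the growth rates cited above, compute demand compounds quickly; a back-of-the-envelope calculation makes the gap concrete:

```python
def compute_multiplier(annual_rate: float, years: int) -> float:
    """Cumulative growth factor after compounding annual_rate for `years` years."""
    return (1 + annual_rate) ** years

# 26-36% annual growth compounds to roughly 3.2x-4.7x over five years,
# a pace smaller players struggle to match with fixed hardware budgets.
low = compute_multiplier(0.26, 5)   # ≈ 3.18
high = compute_multiplier(0.36, 5)  # ≈ 4.65
```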
@Ai_Events
.
Primate Labs Launches Geekbench AI Benchmarking Tool
Primate Labs has launched Geekbench AI, a benchmarking tool designed for machine learning and AI-centric workloads. The tool provides a standardized method for measuring and comparing AI capabilities across different platforms and architectures.
Geekbench AI offers three overall scores, reflecting the complexity and heterogeneity of AI workloads, and includes accuracy measurements for each test. The tool supports a wide range of AI frameworks, including OpenVINO, TensorFlow Lite, and more.
The benchmark is integrated with the Geekbench Browser, allowing for easy cross-platform comparisons and result sharing. Primate Labs anticipates regular updates to Geekbench AI to keep pace with market changes and emerging AI features.
Major tech companies like Samsung and Nvidia have already begun utilizing the benchmark, and Primate Labs believes that Geekbench AI has reached a level of reliability suitable for integration into professional workflows.
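Geekbench AI's scoring internals aren't public; as a rough illustration of the kind of measurement such a benchmark performs, here is a minimal latency/throughput harness (all names hypothetical, with a pure-Python stand-in for a model inference call):

```python
import time
import statistics

def benchmark(workload, runs=20, warmup=3):
    """Time a callable: warm up first, then report median latency and throughput."""
    for _ in range(warmup):          # warm-up runs stabilise caches and JITs
        workload()
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        latencies.append(time.perf_counter() - start)
    median = statistics.median(latencies)
    return {"median_s": median, "throughput_per_s": 1.0 / median}

# Stand-in for an inference call: a small pure-Python matrix multiply.
def fake_inference(n=40):
    a = [[float(i + j) for j in range(n)] for i in range(n)]
    return [[sum(a[i][k] * a[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

result = benchmark(fake_inference)
```

A real AI benchmark additionally scores accuracy per test, as Geekbench AI does, since quantized or approximated models can trade correctness for speed.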
@Ai_Events
.
AI Revolutionizes the World of Gaming
Artificial intelligence (AI) is transforming the gaming industry in various ways, from enhancing non-player characters (NPCs) to adjusting game difficulty and content. With AI, NPCs can behave in more human-like ways, react to their environment, and respond differently to player choices.
Online casinos are also utilizing AI to better understand player preferences and detect unusual behavior, preventing fraud. Social casinos recommend games to players using AI, creating a personalized experience.
AI is being used in console games to create personalized storylines and quests based on player actions, as well as adaptive difficulty levels. AI also improves game visuals with technologies like AI upscaling and ray tracing, making games more realistic.
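Adaptive difficulty of the kind described is often implemented as a simple feedback loop on recent player performance; a toy sketch (names and thresholds hypothetical):

```python
def adjust_difficulty(current: float, recent_results: list,
                      target_win_rate: float = 0.5, step: float = 0.1) -> float:
    """Nudge difficulty up when the player wins too often, down when they lose.

    current: difficulty in [0, 1]; recent_results: True for a win, False for a loss.
    """
    if not recent_results:
        return current
    win_rate = sum(recent_results) / len(recent_results)
    if win_rate > target_win_rate:
        current += step
    elif win_rate < target_win_rate:
        current -= step
    return min(1.0, max(0.0, current))

# Player won 8 of the last 10 rounds: difficulty rises from 0.5 toward 0.6.
new_level = adjust_difficulty(0.5, [True] * 8 + [False] * 2)
```

Production systems use richer signals (time-to-complete, deaths, retries), but the feedback structure is the same.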
Overall, AI is transforming the gaming industry, providing more realistic and personalized experiences for players.
@Ai_Events
.
AI Growth Outpacing Security Measures
A recent survey by PSA Certified has revealed that the rapid growth of AI is outstripping the industry's ability to safeguard products, devices, and services. Two-thirds of 1,260 global technology decision-makers are concerned that the speed of AI advancements is leaving security measures behind.
The survey highlights the need for a holistic approach to security, embedded throughout the entire AI lifecycle, from device deployment to the management of AI models operating at the edge. This proactive approach is deemed essential to building consumer trust and mitigating escalating security risks.
While AI is a huge opportunity, its proliferation also offers the same opportunity to bad actors. Only half of respondents believe their current security investments are sufficient, and essential security practices, such as independent certifications and threat modelling, are being neglected by a substantial portion of respondents.
Industry leaders emphasize the importance of prioritizing security investment and taking a collective responsibility to ensure consumer trust in AI-driven services is maintained. A majority of decision-makers believe their organizations are equipped to handle the potential security risks associated with AI's surge.
@Ai_Events
.
The AI Revolution: Reshaping Data Centres and the Digital Landscape
Artificial intelligence (AI) is changing the world, with a projected global market value of $2-4 trillion by 2030. The future is now, and AI has crept into every facet of our lives, transforming work and play.
AI refers to the simulation of human intelligence processes, including learning, reasoning, and self-correction. Its uptake has been staggering; ChatGPT, for example, reached a million users in just five days.
However, AI has a large appetite for data, requiring enormous computational power for processing. Data centres are the backbones of the digital world, evolving into entire ecosystems to facilitate the flow of information.
Data centres need efficient delivery of data worldwide, requiring power, connectivity, and cooling systems. As AI demands grow, so does the need for compatibility with data centre infrastructure.
Integrating AI presents challenges, including power, connectivity, and cooling. AI is ever-emerging, and regulatory changes must be made, such as the EU's AI Act and NIS2 Directive.
@Ai_Events
.
xAI Unveils Grok-2 to Challenge AI Hierarchy
xAI has announced the release of Grok-2, a major upgrade that boasts improved capabilities in chat, coding, and reasoning. The upgrade includes a smaller but capable version called Grok-2 mini, which will be made available through xAI's enterprise API later this month.
Grok-2 has shown significant improvements in reasoning with retrieved content and in its tool use capabilities, such as correctly identifying missing information, reasoning through sequences of events, and discarding irrelevant posts.
Grok on X also sports a redesigned interface and new features. Premium and Premium+ subscribers will have access to both Grok-2 and Grok-2 mini.
xAI is also collaborating with Black Forest Labs to experiment with their FLUX.1 model to expand Grok's capabilities on X. The company plans to roll out multimodal understanding as a core part of the Grok experience on both X and the API.
While the release of Grok-2 marks a significant milestone for xAI, the AI landscape remains highly competitive, with OpenAI's GPT-4o and Google's Gemini 1.5 leading the pack.
@Ai_Events
.
EU Takes Action Against X's Use of EU User Data for AI Chatbot Training
The European Union has taken action against social media platform X, ordering the company to suspend the use of all data belonging to EU citizens for training its AI systems. This follows proceedings brought by the Irish Data Protection Commission (DPC), which has been monitoring X's data processing activities.
The DPC sought an order to restrain or suspend X's data processing activities on users for the development, training, and refinement of its AI system. This move marks a growing conflict between AI advances and ongoing data protection concerns in the EU.
X has agreed to pause the use of certain EU user data for AI chatbot training, citing concerns that the DPC's order would undermine its efforts to keep the platform safe and restrict its use of technologies in the EU. The company claims to have been fully transparent about the use of public data for AI models, including providing necessary legal assessments and engaging in lengthy discussions with regulators.
The regulatory action against X is not an isolated incident. Other tech giants, such as Meta Platforms and Google, have also faced similar scrutiny in recent months. Regulators are taking a more active role in overseeing how tech companies utilise user data for AI training and development, reflecting growing concerns about data privacy and the ethical implications of AI advancement.
The outcome of this case could set important precedents for how AI development is regulated in the EU, potentially influencing global standards for data protection in the AI era. The tech industry and privacy advocates alike will be watching closely as this situation develops, recognising its potential to shape the future of AI innovation and data privacy regulations.
@Ai_Events
.
SingularityNET Bets on Supercomputer Network to Deliver AGI
SingularityNET is developing a network of powerful supercomputers to achieve Artificial General Intelligence (AGI) by 2025. The first supercomputer is set to be completed in September and will be a Frankensteinian beast of cutting-edge hardware.
The network will host and train complex AI architectures, mimicking the human brain, and featuring vast language models, deep neural networks, and systems that integrate human behaviors with multimedia outputs.
To manage the distributed network, SingularityNET has developed OpenCog Hyperon, an open-source software framework for AI systems. Users will purchase access to the network with the AGIX token on Ethereum and Cardano blockchains and contribute data to the collective pool.
Experts predict human-level AI by 2028, and SingularityNET aims to drive a paradigm shift towards continuous learning, seamless generalization, and reflexive AI self-modification.
@Ai_Events
.
Forwarded from Lex Fridman
Arrest of Pavel Durov is a disturbing attack on free speech and a threat not just to Telegram but to any online platform.
Governments should not engage in censorship. This is a blatant and deeply troubling overreach of power.
OpenAI Launches GPT-4o Fine-Tuning
OpenAI has announced the release of fine-tuning capabilities for its GPT-4o model, allowing developers to tailor the model to their specific needs.
Fine-tuning enables granular control over the model's responses, allowing for customization of structure, tone, and even the ability to follow intricate, domain-specific instructions.
Developers can achieve impressive results with training datasets of as few as a few dozen examples, making improvements accessible across various domains.
OpenAI is providing one million free training tokens per day for every organization until September 23rd, sweetening the deal for developers.
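Fine-tuning jobs of this kind typically take their training data as a JSONL file of chat-formatted conversations, one per line. As a minimal sketch of what such a "few dozen examples" dataset looks like (the conversations and filename here are hypothetical illustrations, not from the announcement):

```python
import json

# Hypothetical domain-specific training examples in the chat format
# commonly used for fine-tuning: each record is one full conversation.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a terse support assistant."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Settings > Security > Reset password."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are a terse support assistant."},
        {"role": "user", "content": "Where can I download invoices?"},
        {"role": "assistant", "content": "Billing > History > Download."},
    ]},
]

# Write one JSON object per line -- the JSONL layout that
# fine-tuning training uploads expect.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

Each example pins down the tone and structure the tuned model should reproduce, which is why a small, consistent dataset can already shift behaviour noticeably.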
@Ai_Events
.
Norway's Sovereign Wealth Fund Urges Boards to Improve AI Governance
Norway's $1.7 trillion sovereign wealth fund is urging companies to improve their governance of artificial intelligence (AI) use in order to mitigate risks.
According to Carine Smith Ihenacho, chief governance and compliance officer, boards need to be proficient with the use of AI and take control of its application in businesses.
The fund has recommended that companies develop comprehensive AI policies at the board level, emphasizing the importance of robust governance structures to manage AI-related risks.
This initiative could have far-reaching implications for corporate governance practices globally, providing a blueprint for responsible AI implementation and governance in the corporate world.
@Ai_Events
.
AI capabilities growing faster than hardware: Can decentralisation close the gap?
AI capabilities have exploded over the past two years, with large language models like ChatGPT, DALL-E, and Midjourney becoming everyday tools. As companies adopt generative AI, the share of companies using it doubled within a year to 65%, up from 33% at the beginning of 2023.
However, training and running AI programs is a resource-intensive endeavour, and big tech appears to have the upper hand, creating a risk of AI centralisation. According to a recent study, the computational power required to sustain AI development is growing at an annual rate of between 26% and 36%.
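To put that growth range in perspective, a quick back-of-the-envelope calculation (a sketch assuming simple compound annual growth, not a figure from the cited study) gives the implied doubling time of compute demand:

```python
import math

def doubling_time(annual_growth_rate: float) -> float:
    """Years for a quantity growing at the given annual rate to double."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# The cited 26%-36% annual growth range implies that the computational
# power needed for AI development doubles roughly every 2.3 to 3 years.
print(round(doubling_time(0.26), 1))  # ~3.0 years at the low end
print(round(doubling_time(0.36), 1))  # ~2.3 years at the high end
```

A doubling every two to three years is a demand curve that only the best-capitalised players can track, which is the centralisation pressure the article describes.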
To reduce the costs associated with computing power, local governments in China have introduced subsidies, pledging to offer computing vouchers to AI startups. In parallel, decentralised computing infrastructures such as the Qubic Layer 1 blockchain are being developed to tap networks of miners for computational power.
A decentralised approach to sourcing AI computational power is more economical, and it would spread influence across a broader set of stakeholders instead of leaving the industry reliant on a few players. Decentralised AI infrastructure would also make developments easier to scrutinise while lowering the cost of entry.
In conclusion, AI innovations are just getting started, but access to computational power remains a headwind. While big tech currently controls most of the resources, decentralised infrastructures offer a better chance of reducing computational costs and breaking big tech's control over one of the most valuable technologies of the 21st century.
@Ai_Events
.