Did you know we have a LinkedIn community where useful data science posts are shared frequently?

If you haven't joined yet, please join:
https://www.linkedin.com/groups/13661437/
πŸ“£ Hurry up! A 50% sitewide discount, lasting only a few hours, on one of the leading online learning platforms πŸ“£

Unlimited Access - link

Trending Topics

Tech Interview Preparation/Mock Interviews - link

Learn To Code - link

Generative AI - link

Data Science Skill Paths - link

All Data Science Courses - link

Web Development - link

System Design - link

UI Design (React) - link

Others - link

For more: @coursenuggets
Join the FREE 5-Day πŸ€– Gen AI Intensive Course with Google

Join @coursenuggets for more
@kdnuggets @datasciencechats
https://bit.ly/genaifreecourse
The AI for Impact Hackathon, presented by Google Cloud and powered by Hack2skill, is a unique opportunity to leverage the transformative power of AI to address pressing social challenges across the APAC region.
You can participate in the hackathon for free via the link below.

Deadline: 17th Nov

Register: https://bit.ly/4hEcMQI
Data Science, Machine Learning, AI & IoT pinned «Our channel is now on WhatsApp too. Feel free to join. We will be sharing some exclusive content posts on WhatsApp and LinkedIn, so don't miss out. WhatsApp: πŸ”— https://whatsapp.com/channel/0029Vaw8loEIXnlogBO5ma1H LinkedIn: πŸ”— https://www.linkedin.com/groups/13622969»
Differences between RAG, Agents and Agentic RAG
@kdnuggets @datasciencechats

Subscribe to the WhatsApp channel for the post:
https://whatsapp.com/channel/0029Vaw8loEIXnlogBO5ma1H/105
Google announced Gemini 2.0, a more advanced AI model capable of native image and audio output and tool use. This new model powers several projects, including Project Astra (a universal AI assistant) and Project Mariner (browser-based task completion). Gemini 2.0 Flash, an experimental version, is available to developers, with a wider release planned. Google emphasizes responsible AI development, prioritizing safety and security in its applications. The announcement highlights Gemini 2.0's integration into Google products and its potential to revolutionize user experience.
https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/
#gemini
@kdnuggets @datasciencechats
Did you know we have some unique learning posts on our WhatsApp channel that don't get shared here? Follow our WhatsApp channel to keep up with those updates. (Your contact number stays anonymous on WhatsApp channels.)
https://bit.ly/dsnwhatsapp
Building LLMs - Stanford Course
#ai #generativeai #llm

https://www.youtube.com/watch?v=9vM4p9NN0Ts

@kdnuggets @datasciencechats

00:10 Building Large Language Models overview
02:21 Focus on data evaluation and systems in industry over architecture
06:25 Autoregressive language models predict the next word in a sentence.
08:26 Tokenizing text is crucial for language models
12:38 Training a large language model involves using a large corpus of text.
14:49 Tokenization process considerations
18:40 Tokenization improvements in GPT-4 for code understanding
20:31 Perplexity measures model hesitation between tokens
24:18 Comparing outputs and model prompting
26:15 Evaluation of language models can yield different results
30:15 Challenges in training large language models
32:06 Challenges in building large language models
35:57 Collecting real-world data is crucial for large language models
37:53 Challenges in building large language models
41:38 Scaling laws predict performance improvement with more data and larger models
43:33 Relationship between data, parameters, and compute
47:21 Importance of scaling laws in model performance
49:12 Quality of data matters more than architecture and losses in scaling laws
52:54 Inference for large language models is very expensive
54:54 Training large language models is costly
59:12 Post-training aligns language models for AI assistant use
1:01:05 Supervised fine-tuning for large language models
1:04:50 Leveraging large language models for data generation and synthesis
1:06:49 Balancing data generation and human input for effective learning
1:10:23 Limitations of human abilities in generating large language models
1:12:12 Training language models to maximize human preference instead of cloning human behaviors.
1:16:06 Training reward model using softmax logits for human preferences.
1:18:02 Modeling optimization and challenges in large language models (LLMs)
1:21:49 Reinforcement learning models and potential benefits
1:23:44 Challenges with using humans for data annotation
1:27:21 LLMs are cost-effective and have better agreement with humans than humans themselves
1:29:12 Perplexity is not calibrated for large language models
1:33:00 Variance in performance of GPT-4 based on prompt specificity
1:34:51 Pre-training data plays a vital role in model initialization
1:38:32 Utilize GPUs efficiently with matrix multiplication
1:40:21 Utilizing 16 bits for faster training in deep learning
1:44:08 Building Large Language Models from scratch
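The perplexity metric covered at 20:31 is easy to sketch: it is the exponential of the average negative log-likelihood the model assigns to the actual next tokens. The snippet below is an illustrative toy computation with made-up probabilities, not code from the lecture.

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-likelihood.

    token_probs: probabilities the model assigned to each actual
    next token in a sequence (toy values, not real model output).
    """
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that is certain of every token has perplexity 1; guessing
# uniformly over a 50k-token vocabulary gives perplexity 50,000.
confident = perplexity([0.9, 0.8, 0.95])
uniform = perplexity([1 / 50000] * 3)
print(round(confident, 3))  # close to 1: little "hesitation" between tokens
print(round(uniform))       # 50000: maximal hesitation
```

Lower perplexity means the model "hesitates" less between candidate tokens, which is why it serves as a basic training and evaluation signal.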
πŸš€ Introducing GPT-5: Launched 32 months after ChatGPT, GPT-5 is hailed as a "major upgrade" and a "significant step along the path to AGI." It's described as conversing with a "PhD level expert" across any field, a substantial leap from previous models.

πŸ“ˆ Unprecedented Growth & Impact: ChatGPT now boasts 700 million weekly users, relying on it for work, learning, advice, and creation. GPT-5 aims to be intuitive, useful, smart, and fast.

πŸ’‘ Enhanced Reasoning and Capabilities: GPT-5 incorporates a "reasoning paradigm" allowing it to "pause to think" for more intelligent, precise answers, eliminating the trade-off between speed and thoughtfulness. It can write entire computer programs, plan events, and explain complex health information.

πŸ“Š Superior Performance Metrics:

Coding: Sets new highs on SWE-bench (real software engineering tasks) and Aider Polyglot (multilingual programming).

Reasoning: Outperforms previous models and most human experts on MMMU (multimodal reasoning) and AIME (mathematical reasoning).

Reliability: Significantly reduces hallucinations, making it the "most reliable, most factual model ever," and performs exceptionally well on health-related questions.


🌐 Broad Accessibility & Tiered Access: GPT-5 is rolling out immediately, available to free, Plus, Pro, NT, Enterprise, and EDU users. Free users get GPT-5 initially before transitioning to Mini, while paid tiers receive higher or unlimited usage with extended thinking capabilities.

πŸ› οΈ Powerful Integrations & Personalization: All existing ChatGPT tools (search, file/image upload, data analysis, image generation, memory, custom instructions) work seamlessly with GPT-5. New features include customizable chat colors, experimental "personalities" (supportive, sarcastic), and crucial integrations with Gmail and Google Calendar for enhanced scheduling and personal assistance.

πŸ›‘οΈ Advanced Safety Features: OpenAI has overhauled safety training with "safe completion," which aims to maximize helpfulness within safety constraints, offering partial answers or alternatives instead of outright refusals. GPT-5 is also significantly less deceptive.

πŸ§ͺ Recursive Model Improvement: New training techniques involve using AI itself to generate high-quality synthetic data and curriculum, creating a "recursive improvement loop" where older models enhance the training data for newer generations.

βš•οΈ Transformative Healthcare Application: Highlighted as a top use case, GPT-5 is the "best model ever for health," scoring highly on the HelpBench evaluation. A personal testimony demonstrated its ability to translate complex medical reports into plain language, aid in critical decision-making, and empower patients.

πŸ’» Revolutionizing Coding: GPT-5 is proclaimed the "best coding model in the world," excelling at "Agentic coding tasks" where it can autonomously tackle complex problems, build entire web apps (like a French learning app or a finance dashboard), and even fix its own code. It also exhibits a strong sense of aesthetics in front-end development.

🀝 Developer Focus & API Enhancements: Available in the API today (GPT-5, Mini, Nano), with tiered pricing. New API features include a "reasoning effort" parameter for latency control, "Custom Tools" for flexible tool calls, "Tool Call Preambles" for explanations, and a "verbosity" parameter for output control. The context window has doubled to 400K tokens.
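The API knobs named above can be pictured as request fields. The payload shape and field names below are assumptions based on the announcement wording, not a verified client call; nothing is actually sent to any API here.

```python
# Hypothetical GPT-5 API request using the parameters named in the post.
# Field names ("reasoning.effort", "text.verbosity") are assumptions.
payload = {
    "model": "gpt-5",
    "input": "Summarize the attached finance dashboard spec.",
    # "reasoning effort" parameter: trade answer quality for latency
    "reasoning": {"effort": "medium"},
    # "verbosity" parameter: control how long the output is
    "text": {"verbosity": "low"},
}

# The doubled context window (400K tokens) bounds prompt size, so a
# client would typically check estimated input length before sending.
MAX_CONTEXT_TOKENS = 400_000

def fits_context(estimated_tokens: int) -> bool:
    """Return True if the prompt fits within the 400K-token window."""
    return estimated_tokens <= MAX_CONTEXT_TOKENS

print(fits_context(350_000))  # True
print(fits_context(450_000))  # False
```

In practice the token estimate would come from a tokenizer rather than a guess, but the pre-flight length check is the relevant pattern.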

🏒 Enterprise & Government Adoption: Over 5 million businesses already use OpenAI technology, with GPT-5 expected to be a "step function" in enabling industries like life sciences (Amgen), finance (BBVA), and healthcare (Oscar Health). Two million US federal employees will also gain access to GPT-5 and ChatGPT.

@kdnuggets @datasciencechats
Here are the latest Gemini image editing features:
🎨 Maintaining Likeness: Photos of people and pets consistently look like themselves even when you change their hairstyle or outfit.

πŸ“ Change Scenarios: Place a person or pet in new locations or give them a new look while keeping their original appearance.

πŸ”„ Blend Photos: Combine multiple photos to create a new sceneβ€”like you and your dog on a basketball court!

✏️ Multi-turn Editing: Continuously edit an image. Start with an empty room, paint the walls, then add furniture.

✨ Mix Designs: Apply the style or texture from one image to an object in anotherβ€”like putting a butterfly's wing pattern on a dress.

Read More: https://blog.google/intl/en-mena/product-updates/explore-get-answers/nano-banana-image-editing-in-gemini-just-got-a-major-upgrade/
@kdnuggets @datasciencechats