https://x.com/bryanrbeal/status/1781454698136109380?s=46&t=Xqv-8tHUNkdwwhoJQlQOjQ
Also, easy to guess which company will be able to effectively scale it to millions of customers
A new type of neural network just dropped. The authors say KANs are both accurate and interpretable.
Compared to MLPs, where weights are just scalar values, in KANs they are learnable univariate functions. This improves interpretability, since the activation functions can now be visualized more effectively.
Just the main takeaway; I guess we will see more research on them in the coming weeks.
https://arxiv.org/abs/2404.19756
arXiv.org
KAN: Kolmogorov-Arnold Networks
Inspired by the Kolmogorov-Arnold representation theorem, we propose Kolmogorov-Arnold Networks (KANs) as promising alternatives to Multi-Layer Perceptrons (MLPs). While MLPs have fixed activation...
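To make the "weights are learnable univariate functions" idea concrete, here is a toy sketch of a single KAN edge. This is my own simplification: the paper parameterizes the edge functions with B-splines, while I use Gaussian basis functions just to keep the sketch short; the class name and parameters are illustrative, not from the paper's code.

```python
import numpy as np

class KANEdge:
    """One KAN 'weight': a learnable univariate function phi(x).

    Parameterized here as a sum of Gaussian basis functions on a
    fixed grid (the paper uses B-splines; RBFs keep this short).
    """
    def __init__(self, n_basis=8, x_range=(-2.0, 2.0), rng=None):
        rng = rng or np.random.default_rng(0)
        self.centers = np.linspace(x_range[0], x_range[1], n_basis)  # fixed grid
        self.width = (x_range[1] - x_range[0]) / n_basis
        self.coef = rng.normal(scale=0.1, size=n_basis)  # learnable coefficients

    def __call__(self, x):
        # phi(x) = sum_k c_k * exp(-((x - t_k) / w)^2)
        x = np.asarray(x, dtype=float)
        basis = np.exp(-((x[..., None] - self.centers) / self.width) ** 2)
        return basis @ self.coef

# An MLP edge computes `w * x` with a scalar w; a KAN edge computes phi(x):
edge = KANEdge()
out = edge(np.array([0.0, 1.0]))
print(out.shape)  # one output per input
```

Since phi is just a smooth 1-D function of its coefficients, you can plot it over `x_range` to "see" what each edge learned, which is where the interpretability claim comes from.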
As a matter of fact, the author explains it himself:
https://x.com/zimingliu11/status/1785483967719981538?s=46&t=Xqv-8tHUNkdwwhoJQlQOjQ
X (formerly Twitter)
Ziming Liu (@ZimingLiu11) on X
MLPs are so foundational, but are there alternatives? MLPs place activation functions on neurons, but can we instead place (learnable) activation functions on weights? Yes, we KAN! We propose Kolmogorov-Arnold Networks (KAN), which are more accurate and interpretable…
Great talk – I have been a huge fan of Scott Galloway recently.
https://youtu.be/qEJ4hkpQW8E?si=lZ-uOEiPvgCpH1oQ
YouTube
How the US Is Destroying Young People’s Future | Scott Galloway | TED
In a scorching talk, marketing professor and podcaster Scott Galloway dissects the data showing that, by many measures, young people in the US are worse off financially than ever before. He unpacks the root causes and effects of this "great intergenerational…
What a great interview. I haven’t finished watching it yet, but from the beginning I have a thought: right now is probably the worst time to do a PhD/research in AI. It is probably an unpopular opinion.
Why? When I was applying for a PhD last semester (I ended up not applying anywhere), I saw a bunch of labs doing research on LLMs. I agree with Yann LeCun that LLMs have a huge problem: they lack capabilities essential to intelligent beings, such as understanding and reasoning about the physical world.
Yes, they can be great for replacing people in low-stakes situations like customer support, or generating emails, etc. – that’s where a bunch of startup ideas come from. But they rely solely on language as a medium for reasoning.
Think about your own reasoning. Does everything you think about involve language? When you fill up your water bottle and try not to overflow it, are you producing any language to guide the process? There is more vision and understanding of physics involved than language. In fact, you could do that even before you learned how to speak and understand language.
And there are more such problems with LLMs.
So, going back to my take on doing a PhD. If you do a PhD in AI, I think it should be something in the fundamentals of AI, physics-informed neural networks, AI for science, etc. That’s where the LLM hype is (mostly) avoided, and that’s where the next step toward AGI lies.
https://youtu.be/5t1vTLU7s40?si=jYx-S0J1WWjGlluA
YouTube
Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI | Lex Fridman Podcast #416
Yann LeCun is the Chief AI Scientist at Meta, professor at NYU, Turing Award winner, and one of the most influential researchers in the history of AI.
Great food for the brain: https://writings.stephenwolfram.com
stephenwolfram.com
Stephen Wolfram Writings
Articles by Stephen Wolfram covering artificial intelligence, computational science and computational thinking, data science, education, future and historical perspectives, sciences, software design, technology, Wolfram products, more.
Big respect to my philosophy professor – classes ended a long time ago, but he keeps teaching us ... over email. By the way, give it a read. He explains everything very well.
Overall, this class is very interesting, but so annoyingly painful that no CS course I have taken has worn me out this much. I hope the prof doesn’t fail me on Monday and I’ll be able to graduate.
One of the best feelings is waking up slowly without an alarm (rest day from the gym), making yourself a cup of coffee, opening the window, and getting to work. A whole day ahead to pursue your goals.
By the way, yesterday I learned that Lehigh alumni can join the Penn Club: https://www.pennclub.org/lehigh
Wow, first impression: GPT-4o is a really impressive model. The response speed surprises me the most.
https://openai.com/index/hello-gpt-4o/
OpenAI
Hello GPT-4o
We’re announcing GPT-4 Omni, our new flagship model which can reason across audio, vision, and text in real time.
OpenReview
A Path Towards Autonomous Machine Intelligence
How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple...
Interesting article, but not a convincing argument: LLMs may understand word meanings only in the sense that we don’t really understand what “meaning” itself is?
https://www.amazon.science/blog/do-large-language-models-understand-the-world
Amazon Science
Do large language models understand the world?
In addition to its practical implications, recent work on “meaning representations” could shed light on some old philosophical questions.
New York has my favorite coffee shop – Blank Street Coffee. I recently got their “Regulars” membership. It costs $17.99 per week, and every two hours you can get any coffee drink at no extra charge, plus a 20% discount on each subsequent one.
Very good value. If you drink a cup of coffee every day, that already comes to $5.5*7=$38.5 per week without the subscription.
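The back-of-envelope math, as a quick sanity check (the $5.50 per-cup price is the figure from the post, not a menu price I verified):

```python
# Blank Street "Regulars" membership vs. paying per cup.
PRICE_PER_CUP = 5.50   # assumed price of one drink
CUPS_PER_WEEK = 7      # one coffee a day
MEMBERSHIP = 17.99     # weekly membership fee

without_membership = PRICE_PER_CUP * CUPS_PER_WEEK
weekly_savings = without_membership - MEMBERSHIP
print(without_membership)        # 38.5
print(round(weekly_savings, 2))  # 20.51
```

So at one drink a day the membership saves roughly $20 a week, and any extra drinks only widen the gap thanks to the 20% discount.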