Forwarded from Alexander Chichigin
https://www.youtube.com/watch?v=B7aPcZM_JXo
In case anyone hasn't yet seen how category theory and (profunctor) optics get applied to ML (in the form of automatic differentiation), right in dynamically-typed Julia. 😏
Keno Fischer: "Optics in the wild: reverse mode automatic differentiation in Julia"
Intercats: 28th June 2022
Using categorical inspiration in real world software systems: "I'll definitely be talking about the optics formalism of reverse mode automatic differentiation, but if I have space, I might end up talking about some more recent…
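Not from the talk itself, just a toy illustration of the shape the optics view gives reverse-mode AD: a differentiable map is (roughly) a lens whose forward part computes the value and whose backward part pulls an output cotangent back to an input cotangent. A minimal C++ sketch under that reading (the names `DLens` and `compose` are invented for this sketch; the talk itself works in Julia):

```cpp
// Toy "lens" view of reverse-mode AD: forward computes the value,
// backward maps an output cotangent to an input cotangent.
// Invented for illustration; not code from the talk.
#include <cmath>
#include <functional>
#include <iostream>
#include <utility>

struct DLens {
    // run: input -> (value, backward), where backward: dOut -> dIn
    std::function<std::pair<double, std::function<double(double)>>(double)> run;
};

// Sequential composition: run f, then g, and chain the pullbacks in reverse.
DLens compose(DLens f, DLens g) {
    return {[f, g](double x) {
        auto fr = f.run(x);
        auto gr = g.run(fr.first);
        auto df = fr.second;
        auto dg = gr.second;
        std::function<double(double)> back = [df, dg](double dz) { return df(dg(dz)); };
        return std::make_pair(gr.first, back);
    }};
}

int main() {
    DLens square{[](double x) {
        return std::make_pair(x * x,
            std::function<double(double)>([x](double dy) { return 2 * x * dy; }));
    }};
    DLens sine{[](double x) {
        return std::make_pair(std::sin(x),
            std::function<double(double)>([x](double dy) { return std::cos(x) * dy; }));
    }};

    DLens f = compose(square, sine);  // f(x) = sin(x^2)
    auto [value, pullback] = f.run(1.5);
    std::cout << "f(1.5)  = " << value << "\n";
    std::cout << "f'(1.5) = " << pullback(1.0)
              << " (expected " << 2 * 1.5 * std::cos(1.5 * 1.5) << ")\n";
}
```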
https://arxiv.org/abs/2209.11339 Faul, Manuell, [2022] "Machine Space I: Weak exponentials and quantification over compact spaces"
Topology may be interpreted as the study of verifiability, where opens correspond to semi-decidable properties. In this paper we make a distinction between verifiable properties themselves and...
Forwarded from AlexTCH
https://lemire.me/blog/2019/10/16/benchmarking-is-hard-processors-learn-to-predict-branches/
Dang! CPUs really do learn to predict branches. And learn fast!
If you're trying to benchmark how your code handles "cold" (fresh) data, the predictor will screw your tests real good.
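A hedged C++ sketch of the effect Lemire describes (not his benchmark code): time the same branchy loop over one fixed random array several times, and later passes typically get faster because the predictor memorizes the pattern; that is exactly what poisons "cold data" benchmarks.

```cpp
// Sketch of the pitfall: re-running a branchy loop over the *same* random
// data lets the branch predictor memorize the pattern, so later passes can
// look much faster than genuinely cold data would be.
// Illustrative only; actual numbers depend on CPU, array size, and flags.
#include <chrono>
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> dist(0, 255);
    std::vector<int> data(4096);
    for (auto& x : data) x = dist(rng);

    volatile std::uint64_t sink = 0;  // keep the work from being optimized away
    for (int pass = 0; pass < 10; ++pass) {
        auto t0 = std::chrono::steady_clock::now();
        std::uint64_t count = 0;
        for (int x : data)
            if (x >= 128) ++count;  // 50/50 data-dependent branch
        auto t1 = std::chrono::steady_clock::now();
        sink = sink + count;
        std::cout << "pass " << pass << ": "
                  << std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count()
                  << " ns\n";
        // To measure the genuinely "cold" case, refill `data` with fresh
        // random values here so the predictor cannot reuse what it learned.
    }
    std::cout << "(checksum " << sink << ")\n";
}
```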
The recordings of ICFP 2022 are available online, https://www.youtube.com/playlist?list=PLyrlk8Xaylp4ee6ZAtFD9XMD2EZ02K9xK
Forwarded from AlexTCH
https://www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor
If you strip away all the nuances: DeepMind found a way to represent matrix multiplication as a single-player game whose score is proportional to algorithm efficiency, and fed it into AlphaZero, which is notoriously good at games. And indeed, a suitably modified AlphaZero, dubbed AlphaTensor, found new state-of-the-art matrix multiplication algorithms for a wide range of fixed matrix sizes, including ones optimized specifically for GPGPUs and TPUs.
In a broader context, this is a huge leap in applying reinforcement learning to algorithms research. Expect a steady stream of papers feeding various kinds of algorithmic problems into more or less the same system.
Discovering novel algorithms with AlphaTensor
In our paper, published today in Nature, we introduce AlphaTensor, the first artificial intelligence (AI) system for discovering novel, efficient, and provably correct algorithms for fundamental...
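Concretely, the "game" amounts to finding low-rank decompositions of the matrix-multiplication tensor, i.e. schemes that use fewer scalar multiplications. For intuition, here is a C++ sketch of the classical Strassen scheme (7 multiplications for a 2x2 product instead of 8); AlphaTensor searches for objects of this kind, though its discovered algorithms target larger fixed sizes and differ from this one.

```cpp
// Strassen's 2x2 multiplication: 7 scalar multiplications instead of the
// naive 8. Shown only for intuition about what a "better matrix
// multiplication algorithm" looks like; this is not an AlphaTensor result.
#include <array>
#include <iostream>

using Mat2 = std::array<std::array<double, 2>, 2>;

Mat2 strassen2x2(const Mat2& A, const Mat2& B) {
    double m1 = (A[0][0] + A[1][1]) * (B[0][0] + B[1][1]);
    double m2 = (A[1][0] + A[1][1]) * B[0][0];
    double m3 = A[0][0] * (B[0][1] - B[1][1]);
    double m4 = A[1][1] * (B[1][0] - B[0][0]);
    double m5 = (A[0][0] + A[0][1]) * B[1][1];
    double m6 = (A[1][0] - A[0][0]) * (B[0][0] + B[0][1]);
    double m7 = (A[0][1] - A[1][1]) * (B[1][0] + B[1][1]);
    return {{{m1 + m4 - m5 + m7, m3 + m5},
             {m2 + m4, m1 - m2 + m3 + m6}}};
}

int main() {
    Mat2 A{{{1, 2}, {3, 4}}};
    Mat2 B{{{5, 6}, {7, 8}}};
    Mat2 C = strassen2x2(A, B);  // expected: [[19, 22], [43, 50]]
    std::cout << C[0][0] << ' ' << C[0][1] << '\n'
              << C[1][0] << ' ' << C[1][1] << '\n';
}
```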
In honor of Dana Scott’s 90th birthday, a Git repository containing PDF scans of a selection of his papers has been established. These are available for public download here:
https://github.com/CMU-HoTT/scott
Forwarded from Протестировал (Sergey Bronnikov)
Remember SETI@home? The project analyzed radio signals from space in search of extraterrestrial intelligence, using volunteers' computing resources. By analogy with SETI@home there is Fuzzing@Home, a project where you can fuzz targets that have been added to OSS-Fuzz right in an ordinary web browser. This is possible thanks to compilation to WebAssembly. Try it yourself: http://fuzzcoin.gtisc.gatech.edu:8000/
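For anyone who hasn't met fuzzing before, a toy C++ sketch of the core loop (throw random inputs at a target and record the ones that break its invariants). The real Fuzzing@Home workers run coverage-guided OSS-Fuzz targets compiled to WebAssembly, which this does not attempt to model; the target function below is purely hypothetical.

```cpp
// Toy fuzzing loop: feed random byte strings to a target and stop at the
// first input that violates its invariant. Real fuzzers (libFuzzer/OSS-Fuzz,
// and the Wasm builds Fuzzing@Home runs) are coverage-guided and far smarter.
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

// Hypothetical buggy target: mishandles inputs starting with the magic "FU".
bool target_ok(const std::vector<std::uint8_t>& in) {
    if (in.size() >= 2 && in[0] == 'F' && in[1] == 'U')
        return false;  // pretend this path corrupts state
    return true;
}

int main() {
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<int> byte(0, 255), len(0, 8);

    for (long iter = 1; iter <= 10'000'000; ++iter) {
        std::vector<std::uint8_t> input(len(rng));
        for (auto& b : input) b = static_cast<std::uint8_t>(byte(rng));
        if (!target_ok(input)) {
            std::cout << "failing input found after " << iter << " attempts\n";
            return 0;
        }
    }
    std::cout << "no failure found\n";
}
```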
https://www.cs.ox.ac.uk/people/samuel.staton/papers/popl23.pdf Dash, Kaddar, Paquet, Staton, "Affine monads and lazy structures for Bayesian programming"
https://lazyppl.bitbucket.io/
Higher-Order Leak and Deadlock Free Locks
by Jules Jacobs, Stephanie Balzer
https://julesjacobs.com/pdf/locks.pdf
Forwarded from Sergey Bronnikov
Looks like JB have decided to revive the seminars
[JB-PLT Seminar] Weak Memory Models 101 (Anton Podkopaev)
In this talk, we introduce weak memory concurrency, consider the requirements imposed on PL memory models, and examine the ones used by industry (C11 [Batty-al:POPL11] and Java [Manson-al:POPL05]) and their drawbacks. Then we explore newer memory models (RC11 [Lahav-al:PLDI17], MRD [Paviotti-al:ESOP20], Promising 1.0 [Kang-al:POPL17], Promising 2.0 [Hwan-al:PLDI20], Weakestmo [Chakraborty-Vafeiadis:POPL19]) proposed as solutions to those drawbacks: what these models provide, which compromises they make, how expensive performance-wise (if at all) those compromises are, and how hard it is to adapt the models for mainstream languages.
When: November 21, 16:00 (CET)
Where: online. Google Meet: https://meet.google.com/myu-dhmz-gvu
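Not from the seminar materials: the classic store-buffering litmus test, here with C++11 relaxed atomics, is the kind of tiny program these memory models have to explain. Both threads can read 0, an outcome a naive interleaving semantics forbids but weak models (and real CPUs) allow.

```cpp
// Store-buffering (SB) litmus test with relaxed atomics. Under a naive
// interleaving (sequentially consistent) reading, at least one thread must
// see the other's store; weak memory models and real hardware also allow
// r1 == 0 && r2 == 0. The weak outcome is not guaranteed to show up on
// every machine or run; compile with -pthread and loop to hunt for it.
#include <atomic>
#include <iostream>
#include <thread>

int main() {
    int weak_outcomes = 0;
    const int iterations = 100000;
    for (int i = 0; i < iterations; ++i) {
        std::atomic<int> x{0}, y{0};
        int r1 = -1, r2 = -1;

        std::thread t1([&] {
            x.store(1, std::memory_order_relaxed);
            r1 = y.load(std::memory_order_relaxed);
        });
        std::thread t2([&] {
            y.store(1, std::memory_order_relaxed);
            r2 = x.load(std::memory_order_relaxed);
        });
        t1.join();
        t2.join();

        if (r1 == 0 && r2 == 0) ++weak_outcomes;
    }
    std::cout << "weak (r1 == 0 && r2 == 0) outcomes: "
              << weak_outcomes << " / " << iterations << "\n";
}
```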
Building the fastest Lua interpreter.. automatically!
https://sillycross.github.io/2022/11/22/2022-11-22/
This is Part 1 of a series of posts. Part 2 is available here: Building a baseline JIT for Lua automatically. It is well-known that writing a good VM for a dynamic language is never an easy job…
Reconciling Shannon and Scott with a Lattice of Computable Information
Sebastian Hunt, David Sands, Sandro Stucki
https://arxiv.org/abs/2211.10099
Back in 2018 we mentioned the Flix language here, https://flix.dev/. By now it's safe to say the language is doing well: people keep working on it, they clearly have enough expertise, and releases keep coming out: https://twitter.com/flixlang/status/1596035647000948736
The language itself is quite interesting from an academic point of view; besides what's written on the front page, you can also read this 2020 paper:
https://flix.dev/paper/oopsla2020b.pdf
https://srfi.schemers.org/srfi-226/srfi-226.html
An attempt to rethink how continuations are built into Scheme. One of the reasons for the rethink was Oleg Kiselyov's famous argument against call/cc. Why does it matter: this is the bleeding edge of control-flow machinery. What's in Scheme today may be everywhere in ten years.