What is Entropy?
John C. Baez, 2024

https://arxiv.org/pdf/2409.09232
The authors show that, with the right algorithmic enhancements, first-order methods (*) can be competitive with state-of-the-art linear programming solvers, even for problems requiring high-accuracy solutions. This opens up new possibilities and computational trade-offs for solving LP problems, especially large-scale instances.

* In optimization, first-order methods are those that use only first-derivative (gradient) information, in contrast to second-order methods such as interior-point solvers, which also rely on Hessians.

Blog: https://research.google/blog/scaling-up-linear-programming-with-pdlp/

Paper (see the blog above for an intro): https://www.openread.academy/en/paper/reading?corpusId=235376806
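
To make "first-order" concrete: the blog describes PDLP as building on the primal-dual hybrid gradient (PDHG) method, whose per-iteration work is just a couple of matrix-vector products. Here is a minimal Haskell sketch of one PDHG iteration for an LP in the form min c.x subject to Ax = b, x >= 0; the dense-list representation and fixed step sizes tau and sigma are my simplifications for illustration, not PDLP's actual implementation (which layers further enhancements on top).

    type Vec = [Double]
    type Mat = [Vec]  -- row-major

    dot :: Vec -> Vec -> Double
    dot u v = sum (zipWith (*) u v)

    mv :: Mat -> Vec -> Vec           -- A x
    mv a x = map (`dot` x) a

    mvT :: Mat -> Vec -> Vec          -- A^T y
    mvT a y = foldr1 (zipWith (+)) (zipWith (\yi row -> map (yi *) row) y a)

    -- One PDHG iteration: a projected gradient step on the primal,
    -- then a gradient step on the dual at the extrapolated point
    -- 2x' - x.  Only matrix-vector products are needed -- no matrix
    -- factorizations -- which is what makes the method first-order
    -- and cheap to scale.
    pdhgStep :: Double -> Double -> Mat -> Vec -> Vec
             -> (Vec, Vec) -> (Vec, Vec)
    pdhgStep tau sigma a b c (x, y) = (x', y')
      where
        x' = map (max 0) (zipWith (-) x (map (tau *) (zipWith (-) c (mvT a y))))
        xe = zipWith (\u v -> 2 * u - v) x' x
        y' = zipWith (+) y (map (sigma *) (zipWith (-) b (mv a xe)))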
A Science of Concurrent Programs (2024)

L. Lamport

https://lamport.azurewebsites.net/tla/science.pdf
Cloud-Native Database Systems and Unikernels: Reimagining OS Abstractions for Modern Hardware
V. Leis, C. Dietrich

https://www.vldb.org/pvldb/vol17/p2115-leis.pdf
FrontierMath: A Benchmark for Evaluating Advanced Mathematical Reasoning in AI

We introduce FrontierMath, a benchmark of hundreds of original, exceptionally challenging mathematics problems crafted and vetted by expert mathematicians. The questions cover most major branches of modern mathematics -- from computationally intensive problems in number theory and real analysis to abstract questions in algebraic geometry and category theory. Solving a typical problem requires multiple hours of effort from a researcher in the relevant branch of mathematics, and for the upper end questions, multiple days. FrontierMath uses new, unpublished problems and automated verification to reliably evaluate models while minimizing risk of data contamination. Current state-of-the-art AI models solve under 2% of problems, revealing a vast gap between AI capabilities and the prowess of the mathematical community. As AI systems advance toward expert-level mathematical abilities, FrontierMath offers a rigorous testbed that quantifies their progress.

https://arxiv.org/abs/2411.04872
https://doisinkidney.com/posts/2024-11-08-formalising-graphs-coinduction.html

Formalising Graphs with Coinduction
D. Kidney, N. Wu

All that said, I think the representation we ended up with in the paper is quite nice. We start with a similar representation to the one we had in our ICFP paper in 2021: a graph over vertices of type a is simply a function a -> [a] that returns the neighbours of a supplied vertex (this is the same representation as in this post). Despite the simplicity, it turns out that this type is enough to implement a decent number of search algorithms. The really interesting thing is that the arrow methods (from Control.Arrow) work on this type, and they define an algebra on graphs similar to the one from Mokhov (2017). For example, the <+> operator is the same as the overlay operation in Mokhov (2017).

That simple type then gets extended and complicated: eventually, we represent a possibly-infinite collection as a function that takes a depth and returns everything in the search space up to that depth.
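
As a concrete rendering of the two representations described above, here is a short Haskell sketch; the names overlay, Space, and bounded are mine, and the Kleisli wrapping is how the Control.Arrow methods apply to a -> [a]:

    import Control.Arrow (Kleisli (..), (<+>))

    -- The paper's starting representation: a graph over vertices of
    -- type a maps each vertex to its neighbours.
    type Graph a = a -> [a]

    -- Up to the newtype, Graph a is Kleisli [] a a, so the arrow
    -- combinators apply; in particular (<+>) takes the neighbours
    -- from either graph, i.e. it is overlay in Mokhov's algebra.
    overlay :: Graph a -> Graph a -> Graph a
    overlay f g = runKleisli (Kleisli f <+> Kleisli g)

    -- The expanded representation: a possibly-infinite search space
    -- as a function from a depth bound to everything reachable
    -- within that depth (duplicates on cyclic graphs not pruned).
    type Space a = Int -> [a]

    bounded :: Graph a -> a -> Space a
    bounded g root d = concat (take (d + 1) (iterate (concatMap g) [root]))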
The Denotational Semantics of SSA

J. Ghalayini, N. Krishnaswami

Static single assignment form, or SSA, has been the dominant compiler intermediate representation for decades. In this paper, we give a type theory for a variant of SSA, including its equational theory, which are strong enough to validate a variety of control and data flow transformations. We also give a categorical semantics for SSA, and show that the type theory is sound and complete with respect to the categorical axiomatization. We demonstrate the utility of our model by exhibiting a variety of concrete models satisfying our axioms, including in particular a model of TSO weak memory. The correctness of the syntactic metatheory, as well as the completeness proof, has been mechanized in the Lean proof assistant.

https://arxiv.org/abs/2411.09347
Shades of Iteration: from Elgot to Kleene
Sergey Goncharov

http://wadt2022.web.ua.pt/WADTSlides/SergeyGoncharov.pdf
Defunctionalized Interpreters for Programming Languages

Olivier Danvy

This document illustrates how functional implementations of formal semantics (structural operational semantics, reduction semantics, small-step and big-step abstract machines, natural semantics, and denotational semantics) can be transformed into each other. These transformations were foreshadowed by Reynolds in "Definitional Interpreters for Higher-Order Programming Languages" for functional implementations of denotational semantics, natural semantics, and big-step abstract machines using closure conversion, CPS transformation, and defunctionalization. Over the last few years, the author and his students have further observed that functional implementations of small-step and of big-step abstract machines are related using fusion by fixed-point promotion and that functional implementations of reduction semantics and of small-step abstract machines are related using refocusing and transition compression. It furthermore appears that functional implementations of structural operational semantics and of reduction semantics are related as well, also using CPS transformation and defunctionalization.

https://dl.acm.org/doi/10.1145/1411204.1411206
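
For readers unfamiliar with the last of those transformations, here is a minimal defunctionalization example in Haskell (my illustration, not one of Danvy's): the higher-order predicate argument is replaced by a first-order data type with one constructor per function that is ever passed, plus an apply function that interprets it.

    -- Higher-order original: the predicate is a function value.
    filterHO :: (a -> Bool) -> [a] -> [a]
    filterHO p = foldr (\x r -> if p x then x : r else r) []

    -- Defunctionalized: one constructor per lambda the program
    -- ever passes, interpreted by a first-order apply.
    data Pred = IsEven | GreaterThan Int

    apply :: Pred -> Int -> Bool
    apply IsEven          n = even n
    apply (GreaterThan k) n = n > k

    filterDefun :: Pred -> [Int] -> [Int]
    filterDefun p = foldr (\x r -> if apply p x then x : r else r) []

So filterDefun (GreaterThan 3) [1..6] gives [4,5,6], matching filterHO (> 3). The payoff is that the program becomes first-order, which is what turns, for example, a CPS-transformed interpreter into a first-order abstract machine.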
Forwarded from Alexander Chichigin
Well, roughly speaking, it's a reimplementation of the core of Alloy in Racket. Accordingly, its main capabilities are:
1. declaratively specifying relational models (as in Alloy)
2. running (bounded) model checking over these models and user-defined predicates (as in Alloy)
3. visualizing example instances of the models (reportedly better than in Alloy)
4. defining your own syntax for describing models, including domain-specific ones (as in Racket)

Something like that.
The Long Way to Deforestation: A Type Inference and Elaboration Technique for Removing Intermediate Data Structures

Yijia Chen, Lionel Parreaux

Deforestation is a compiler optimization that removes intermediate data structure allocations from functional programs to improve their efficiency. This is an old idea, but previous approaches have proved limited or impractical — they either only worked on compositions of predefined combinators (shortcut fusion), or involved the aggressive unfolding of recursive definitions until a depth limit was reached or a reoccurring pattern was found to tie the recursive knot, resulting in impractical algorithmic complexity and large amounts of code duplication. We present Lumberhack, a general-purpose deforestation approach for purely functional call-by-value programs. Lumberhack uses subtype inference to reason about data structure production and consumption and uses an elaboration pass to fuse the corresponding recursive definitions. It fuses large classes of mutually recursive definitions while avoiding much of the unproductive (and sometimes counter-productive) code duplication inherent in previous approaches.

https://dl.acm.org/doi/10.1145/3674634
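
As a reminder of what deforestation buys, here is a standard textbook example in Haskell (mine, not from the paper):

    -- The composition builds an intermediate list of squares that
    -- is immediately consumed by sum.
    sumSquares :: [Int] -> Int
    sumSquares xs = sum (map (^ 2) xs)

    -- What a deforesting optimizer aims to produce: one recursive
    -- loop, with no intermediate list allocated.
    sumSquaresFused :: [Int] -> Int
    sumSquaresFused = go 0
      where
        go acc []       = acc
        go acc (x : xs) = go (acc + x * x) xs

Per the abstract, Lumberhack's contribution is performing this kind of fusion on large classes of mutually recursive, user-defined definitions, rather than only on compositions of predefined combinators.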
Efficient, Portable, Census-Polymorphic Choreographic Programming

https://arxiv.org/pdf/2412.02107