Have I already written about Kamkin's book?
https://dmkpress.com/catalog/computer/software_development/978-5-93700-340-9/

Across three hundred (300) pages, the author surveys a very broad range of program verification topics:
— the basics of program semantics and deductive verification à la Floyd, Hoare, and Dijkstra, including Frama-C and Why3
— the theoretically undecidable, yet in practice extremely interesting, problem of automatic loop invariant synthesis, including the use of abstract interpretation
— the basics of SAT/SMT solvers and the DPLL algorithm
— concurrent programs, LTL, and Promela/SPIN
— Büchi automata, their connection to LTL, and their use in model checking
— symbolic model checking (which earned a Turing Award) and NuSMV
— plus the use of formal methods in software testing, so that the reader learns something useful after all 😁
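Speaking of DPLL: the core of the algorithm fits in a screenful. Here's a minimal Python sketch of my own (unit propagation plus splitting on a literal; an illustration, not code from the book):

```python
# Minimal DPLL SAT solver sketch (illustration only).
# A formula is a list of clauses; a clause is a set of integer literals
# (a positive int is a variable, a negative int is its negation).

def simplify(clauses, lit):
    # Drop clauses satisfied by lit, strip -lit from the rest;
    # return None if an empty clause (conflict) appears.
    out = []
    for c in clauses:
        if lit in c:
            continue
        reduced = c - {-lit}
        if not reduced:
            return None
        out.append(reduced)
    return out

def dpll(clauses, assignment=frozenset()):
    clauses = [set(c) for c in clauses]
    # Unit propagation: repeatedly assign literals forced by unit clauses.
    while True:
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        lit = next(iter(unit))
        assignment = assignment | {lit}
        clauses = simplify(clauses, lit)
        if clauses is None:
            return None  # conflict on this branch
    if not clauses:
        return assignment  # every clause satisfied
    # Split: pick a literal and try both polarities.
    lit = next(iter(clauses[0]))
    for choice in (lit, -lit):
        simplified = simplify(clauses, choice)
        if simplified is not None:
            result = dpll(simplified, assignment | {choice})
            if result is not None:
                return result
    return None  # unsatisfiable under this assignment

# (x1 ∨ x2) ∧ (¬x1 ∨ x3) ∧ (¬x2 ∨ ¬x3)
# Returns a set of literals satisfying every clause, or None if UNSAT.
model = dpll([{1, 2}, {-1, 3}, {-2, -3}])
```

Real solvers add watched literals, clause learning, and heuristics on top, but the skeleton is this simple.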
I've definitely already talked about the book by our very own Rustan Leino:
https://dmkpress.com/catalog/computer/software_development/978-5-93700-199-3/

But I'm not ashamed to bring it up again and again. After all, it was written by Rustan Leino; in principle, that alone is enough. 😁
gemma3:12b for ollama weighs in at about 8.1 GiB but on inference uses only about 4 GiB of VRAM.
gemma3:12b-it-qat for ollama weighs in at about 8.9 GiB (surprisingly) and on inference uses about 3.6 GiB of VRAM.
deepseek-r1:14b for ollama weighs in at about 9.0 GiB and on inference uses about 6.5 GiB of VRAM.
https://www.kickstarter.com/projects/crowleyaudio/lovecraft-investigations-crowley

The folks want to deeply investigate the life and "work" of the infamous Aleister Crowley and present it in the form of a podcast.

I still don't know why people call any old radio play or other kind of broadcast published on the Internet a "podcast". I guess that's the term now. Anyway, Crowley had a mind-boggling biography, having had the luck of being just the right kind of crazy at a time ripe for that kind of craze, which would be entertaining to learn about.

Besides, they say the previous "seasons" of The Lovecraft Investigations, produced in partnership with the BBC, are terrific in every sense of the word. I'll check them out too.
mistral-small3.1 for ollama weighs in at about 15 GiB and runs on the CPU for some reason...
Technically, a Google search works faster than asking a local LLM, even counting the results page load time. But while you're clicking through "prove you're a human", "consent to our spying cookies", and all the other crap between you and whatever Google has found... the local LLM has already spat out half the answer...
Proposing a new intelligence metric to replace the IQ score:

LLM-equivalent

The size of an LLM that's about as smart as you. Measured in the number of weights, as per usual. Tens of billions, if you're lucky. 😏
http://pl.ewi.tudelft.nl/seminar/2025/05/07/anuyts/
Extensible types as unknown bialgebras

Andreas Nuyts is inventing something crazy in the best sense of the word! Unfortunately, he hasn't published anything on this topic yet...
https://dl.acm.org/doi/10.1145/3694848.3694852
Towards Verification of a Denotational Semantics of Inheritance

Peter Mosses mechanizes in Agda (sic!) the proofs from the seminal OOPSLA '89 paper by Jens Palsberg and William Cook on the (fixpoint) semantics of inheritance.

That's fucking awesome!!! 🔥
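The Palsberg–Cook denotational story (a class is a "generator" over self, an object is its fixpoint, inheritance transforms the generator) can be caricatured in a few lines of Python. A sketch of the idea only, nothing like Mosses's Agda development; all names here are made up:

```python
# A generator maps "self" to a dict of methods; an object is its fixpoint.

def fix(gen):
    # Tie the knot: make an empty object, then install the methods the
    # generator produces for that very object. Late binding via attribute
    # lookup makes self-reference and overriding work.
    obj = type("Obj", (), {})()
    for name, fn in gen(obj).items():
        setattr(obj, name, fn)
    return obj

def inherit(child_gen, parent_gen):
    # The derived generator: compute the parent's methods for the same
    # self, then let the child override them (with access to "super").
    def combined(self):
        sup = parent_gen(self)
        methods = dict(sup)
        methods.update(child_gen(self, sup))
        return methods
    return combined

def point_gen(self):
    return {
        "x": lambda: 1,
        "double_x": lambda: 2 * self.x(),  # late-bound call through self
    }

def shifted_gen(self, sup):
    # Override x, delegating to the parent's x via "super".
    return {"x": lambda: sup["x"]() + 10}

p = fix(point_gen)
print(p.double_x())  # 2

q = fix(inherit(shifted_gen, point_gen))
print(q.double_x())  # 22: the inherited double_x sees the overridden x
```

The punchline of the paper is exactly the last line: the parent's `double_x` picks up the child's `x`, because the fixpoint is taken after the generators are combined.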
In the crazy world of Web GIS systems, you download satellite images from a cloud and then remove clouds from the images... 😂
`/set parameter num_gpu 48` in ollama for gemma3:12b supposedly sends all the layers to the GPU, which occupies 7 GiB of VRAM and bumps the generation rate from about 5 tokens/sec to about 18 tokens/sec.
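If the setting works out, it can be baked into a derived model with a Modelfile instead of typing `/set` every session (a sketch; the name gemma3_12b_opt is just what I picked):

```
# Modelfile: derive a model with the GPU-offload parameter preset
FROM gemma3:12b
PARAMETER num_gpu 48
```

Then `ollama create gemma3_12b_opt -f Modelfile` builds the derived model, and `ollama run gemma3_12b_opt` picks up the parameter automatically.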
NAME              SIZE     PROCESSOR          EVAL RATE
deepseek-r1:8b    6.9 GB   100% GPU           50.70 tokens/s
gemma3_12b_opt    13 GB    46%/54% CPU/GPU    17.72 tokens/s
deepseek-r1:14b   11 GB    33%/67% CPU/GPU    6.35 tokens/s

Intel(R) Core(TM) i7-10870H CPU @ 2.20GHz
32 GiB RAM
NVIDIA GeForce RTX 3070 Laptop GPU
8 GiB VRAM
mistral-small3.1 does not run on an Nvidia RTX 3070 even if you explicitly try to make it do so. The dated GPU probably lacks support for some required features.
One indisputable advantage of "code copilots": they nudge developers into writing detailed, understandable comments.
TIL: in Julia, type names are written in capital (or camel) case, but that's merely code style. The interpreter happily accepts and works with lowercase (or snake-case) type names. I couldn't find this anywhere in the docs and had to test it manually.
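For the record, this is roughly what I tested (a snippet of my own; any recent Julia should behave the same):

```julia
# Style says CamelCase, but a lowercase type name parses and works fine:
struct my_point
    x::Float64
    y::Float64
end

p = my_point(1.0, 2.0)
println(p.x + p.y)  # 3.0
```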
The ancient evil has awakened, but it hasn't hit its stride yet. No destruction before the first mug of coffee!
When they say "AI makes you more productive", they mean "AI makes people who could build the thing on their own more productive". If you can't make it without AI, you don't become more productive; you're as clueless as before.