Zero-Setup All-in-One Java Tooling via Mill Bootstrap Scripts
https://mill-build.org/blog/16-zero-setup.html
https://redd.it/1npb789
@r_scala
Which web framework is the smallest one in terms of JAR size including its dependencies?
For context, I'm looking to build an embedded admin-dashboard-style web server. It will serve requests on a different port and will be embedded in my Play Framework app (but I want it to work anywhere else just by including a JAR and adding some config code).
I wonder which web framework for Scala or Java is the smallest in size, including its dependencies.
https://redd.it/1npjjd3
@r_scala
ScalaTut: Scala learning, tutorials, references, and general related info.
https://scalatut.greq.me/
https://redd.it/1nqysm8
@r_scala
ldbc v0.4.0 is out 🎉
# ldbc v0.4.0 is released with built-in connection pooling for the Pure Scala MySQL connector!
TL;DR: Pure Scala MySQL connector that runs on JVM, Scala.js, and Scala Native now includes connection pooling designed specifically for Cats Effect's fiber-based concurrency model.
We're excited to announce the release of ldbc v0.4.0, bringing major enhancements to our Pure Scala MySQL connector that works across JVM, Scala.js, and Scala Native platforms.
The highlight of this release is the built-in connection pooling for our Pure Scala connector, eliminating the need for external libraries like HikariCP while providing superior performance optimized for Cats Effect's fiber-based concurrency model.
https://github.com/takapi327/ldbc/releases/tag/v0.4.0
# Major New Features
The highlight of this release is the built-in connection pooling for our Pure Scala connector, providing a pooling solution specifically optimized for Cats Effect's fiber-based concurrency model.
## 🏊 Built-in Connection Pooling
A connection pool designed specifically for Cats Effect applications:
* CircuitBreaker for automatic failure handling
* Adaptive pool sizing based on load patterns
* Connection leak detection for development
* Comprehensive metrics tracking
* Before/After hooks for connection lifecycle management
This gives you the flexibility to choose the pooling strategy that best fits your application's needs.
## 📊 Stream Support with fs2
Efficiently handle large datasets without memory overhead:
import fs2.Stream
import ldbc.dsl.*

val cities: Stream[IO, City] =
  sql"SELECT * FROM city WHERE population > $minPop"
    .query[City]
    .stream(fetchSize = 1000)
    .readOnly(connector)
## 🔄 New MySQLDataSource API
A cleaner, more intuitive API replacing the old ConnectionProvider:
// Simple connection
val dataSource = MySQLDataSource
  .build[IO]("localhost", 3306, "user")
  .setPassword("password")
  .setDatabase("mydb")

// With connection pooling
val pooled = MySQLDataSource.pooling[IO](
  MySQLConfig.default
    .setHost("localhost")
    .setPort(3306)
    .setUser("user")
    .setPassword("password")
    .setDatabase("mydb")
    .setMinConnections(5)
    .setMaxConnections(20)
)

pooled.use { pool =>
  val connector = Connector.fromDataSource(pool)
  // Execute your queries
}
# Why ldbc?
✅ 100% Pure Scala - No JDBC dependency required
✅ True cross-platform - Single codebase for JVM, JS, and Native
✅ Fiber-native design - Built from the ground up for Cats Effect
✅ Resource-safe - Leverages Cats Effect's Resource management
✅ Flexible deployment - Use with or without connection pooling
# Links
GitHub: https://github.com/takapi327/ldbc
Documentation: https://takapi327.github.io/ldbc/
Scaladex: https://index.scala-lang.org/takapi327/ldbc
Migration Guide: https://takapi327.github.io/ldbc/migration-notes.html
https://redd.it/1nu7et9
@r_scala
This week in #Scala (Sep 29, 2025)
https://open.substack.com/pub/thisweekinscala/p/this-week-in-scala-sep-29-2025?r=8f3fq&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
https://redd.it/1ntnhv9
@r_scala
Getting Zionomicon
Has anyone been able to get Zionomicon recently through the official website (https://www.zionomicon.com)? I’ve filled out the form 4 times over the past month using 3 different email addresses, but I still haven’t received anything. I made sure the communication checkbox was checked. I even contacted Ziverge - they just advised me to try again and then went silent after I followed up. Is there any other way to get it?
https://redd.it/1nvgaxd
@r_scala
Event Journal Corruption Frequency — Looking for Insights
I’ve been working with Scala/Akka for several years on a large-scale logistics platform, where we lean heavily on event sourcing. Event journals give us all the things we value: fast append-only writes, immutable history, and natural alignment with the actor model (each entity maps neatly to a real-world package, and failures are isolated per actor).
That said, our biggest concern is the integrity of the event journal. If it becomes corrupted, recovery can be very painful. In the past 5 years, we’ve had two major incidents while using Cassandra (Datastax) as the persistence backend:
1. Duplicate sequence numbers – An actor tried to recover from the database, didn’t see existing data, and started writing from sequence 1 again. This led to duplicates and failure on recovery. The root cause coincided with a Datastax data center incident (disk exhaustion). I even posted to the Akka forum about this incident: https://discuss.akka.io/t/corrupted-event-journal-in-akka-persistence/10728
2. Missing sequence numbers – We had a case where a sequence number vanished (e.g., events 1,2,3,5,6 but 4 missing), which also prevented recovery.
Two incidents over five years is not exactly frequent, but both required manual intervention: editing/deleting rows in the journal and related Akka tables. The fixes were painful, and it shook some confidence in event sourcing as “bulletproof.”
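For the second failure mode, gaps can at least be detected before an entity attempts recovery. A minimal sketch of such a check (a hypothetical helper, not part of Akka Persistence — in practice you would feed it the sequence numbers read back from the journal for one persistence id):

```scala
// Hypothetical helper: given the sequence numbers stored for one
// persistence id, report any gaps (e.g. 1,2,3,5,6 -> 4 is missing)
// so the journal can be inspected before recovery fails.
def missingSeqNrs(seqNrs: Seq[Long]): Seq[Long] =
  if (seqNrs.isEmpty) Seq.empty
  else {
    val present = seqNrs.toSet
    (seqNrs.min to seqNrs.max).filterNot(present)
  }
```

Running a scan like this periodically (or before planned failovers) turns a recovery-time surprise into an offline repair task.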
My questions to the community:
1. Datastore reliability – Is this primarily a datastore/vendor issue (Cassandra quirks) or would a relational DB (e.g., Postgres) also occasionally corrupt journals? For those running large event-sourced systems in production with RDBMS, how often do you see corruption?
2. Event journal guarantees – Conceptually, event sourcing is very solid, but these incidents make me wonder: is this just the price of relying on eventually consistent, log-structured DBs, or is it more about making the right choice of backend?
Would really appreciate hearing experiences from others running event-sourced systems in production - particularly around how often journal corruption has surfaced, and whether certain datastores are more trustworthy in practice.
https://redd.it/1nvrc9g
@r_scala
Hearth 0.1.0 - the first release of a library that tries to make macros easier
https://github.com/MateuszKubuszok/hearth/releases/tag/0.1.0
https://redd.it/1nwt7zd
@r_scala
Make Illegal AI Edits Unrepresentable
https://www.youtube.com/watch?v=sPjHsMGKJSI
https://redd.it/1ny6psu
@r_scala
Fullstack (scala3+scalajs) stack recommendation
I'm looking for recommendations for a fullstack app stack. It should include cats-effect, as I'm comfortable working with effects. I want to be able to interact with existing React libraries like react-flow (I'm fine if some parts are less typed or I need to define some types myself). If there is some state management included, that's fine too.
Something that's simple and works well FE/BE-wise; the less npm and other FE-specific tooling required, the better.
If I can define just one trait and get a FE client and implement the BE logic, that'd be best (I don't care about "niceness" of REST endpoints etc.; any RPC will do). The more ergonomic it is for me as a Scala dev, the better.
It's going to be my personal app maintained by single person only for my needs, so there are no requirements such as "nice openapi generation" and other stuff that beats you down at work.
https://redd.it/1nzgp05
@r_scala
Scala 2.13.17 is here!
2.13.17 improves compatibility with JDK 25 LTS, supports Scala 3.7, improves Scala 3 compatibility and migration, and more.
It also has a few minor potentially breaking changes.
For details, refer to the release notes on GitHub: https://github.com/scala/scala/releases/tag/v2.13.17
https://redd.it/1o02uyc
@r_scala
Scala 2/3 + Slick cursor based pagination library
I've just open-sourced my (in my opinion) pretty developer-friendly library for implementing cursor/keyset-based pagination with Slick. It has a modular architecture for cursor encoding/decoding, though initially only play-json + Base64 are supported. Things like other codecs or cursor signing/encryption/compression can be easily implemented. (Contributions welcome!)
Here's the library for people who don't like reading: https://github.com/DevNico/slick-seeker
Following is just some backstory
The first version of this is over a year old and has been "battle tested" in a production environment with a few thousand users. Initially the API was a little more cumbersome and you had to define both the query extractor and the result-set extractor in the .seek function. I've streamlined this so you just define the query extractor; everything then gets appended to the final db query and auto-extracted from there. This does add minimal overhead, but the improved ergonomics outweigh the "cost" by far. It also allows usage of any computed expression (but beware: this might tank performance if it can't be / isn't indexed properly).
Since the backend is Scala 3, the first version also used Scala 3-specific syntax (givens, extension methods, etc.) and wasn't really reusable. I've decided to rewrite it to support Scala 2, taking inspiration from slick-pg's (also a great library) way of including the functionality by creating your own Profile wrapper.
Please let me know what you think / give me your ideas for improvements!
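For readers new to the idea: keyset pagination replaces OFFSET with a WHERE clause on the last-seen sort key, and ships that key to the client as an opaque Base64 cursor. A minimal stdlib sketch of the mechanism (slick-seeker's real codecs are pluggable, use play-json by default, and encode all sort columns and directions — the names below are illustrative only):

```scala
import java.nio.charset.StandardCharsets.UTF_8
import java.util.Base64

// Opaque cursor: the last-seen id, Base64-encoded for the client.
def encodeCursor(lastSeenId: Long): String =
  Base64.getUrlEncoder.encodeToString(lastSeenId.toString.getBytes(UTF_8))

def decodeCursor(token: String): Long =
  new String(Base64.getUrlDecoder.decode(token), UTF_8).toLong

// Next page = rows with id strictly greater than the decoded cursor,
// in key order; in SQL this becomes "WHERE id > ? ORDER BY id LIMIT ?".
def nextPage(
    rows: Seq[(Long, String)],
    cursor: Option[String],
    size: Int
): (Seq[(Long, String)], Option[String]) = {
  val after = cursor.map(decodeCursor).getOrElse(Long.MinValue)
  val page  = rows.sortBy(_._1).filter(_._1 > after).take(size)
  (page, page.lastOption.map(r => encodeCursor(r._1)))
}
```

Unlike OFFSET, each page costs the same regardless of how deep you paginate, which is the main reason to reach for a library like this.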
https://redd.it/1o085nx
@r_scala
An Omakase-style PlayFramework Template: PlayFast
https://tanin.nanakorn.com/an-omakase-style-playframework-template-playfast/
https://redd.it/1o0lc2a
@r_scala
Built a Slack bot with ZIO - learned a ton about fiber interruption and WebSocket management
Hey everyone! I've been tinkering with ZIO for a few months and decided to build a Slack bot just to see what I could learn. Not sure if anyone will find this interesting, but I had a blast working through some tricky problems and wanted to share.
What it does: It's a Socket Mode Slack bot that connects LLMs (Ollama, OpenAI, etc.) to Slack threads. Nothing groundbreaking, but it was a fun way to explore some ZIO patterns.
Two things I'm kinda proud of:
1. Speculative execution with fiber interruption
The idea: most LLM chat interfaces we're used to prevent the user from sending a new message while the model is still responding. Slack doesn't work like that, so figuring out a natural way for folks to interact with an LLM wasn't as straightforward as I'd hoped.
If someone sends a message while the LLM is still generating a response to their previous message, the bot cancels the old request and starts fresh with the latest context. I used sliding queues (capacity 1) per thread - newer messages just push out the old ones.
The tricky part was getting a monitor fiber to detect when a newer message arrives and interrupt the LLM fiber. It took me a while to wrap my head around ZIO's interruption model, but once it clicked: no wasted API calls, and users always get responses to their latest message.
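The drop-oldest semantics can be illustrated with a tiny stdlib analogue (in the bot this is `zio.Queue.sliding(1)` per Slack thread, which adds a blocking `take` on top; the class below is just a sketch of the behaviour, not the bot's actual code):

```scala
import java.util.concurrent.atomic.AtomicReference

// Capacity-1 "sliding queue": a newer offer simply replaces whatever
// is waiting, so the consumer only ever sees the latest message. The
// monitor fiber then interrupts the in-flight LLM fiber whenever a
// newer message has displaced the one it was answering.
final class SlidingSlot[A] {
  private val slot = new AtomicReference[Option[A]](None)
  def offer(a: A): Unit = slot.set(Some(a))    // newest message wins
  def poll(): Option[A] = slot.getAndSet(None) // consume the latest, if any
}
```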
2. WebSocket connection management
Slack's Socket Mode (which is all very ... special) requires persistent WebSocket connections, and I wanted to handle reconnections gracefully. Built a little connection pool with health monitoring - tracks connection state (ok/degrading/closed), automatically reconnects on failure, and records everything with OpenTelemetry.
The pattern of using Ref for connection state + scheduled health checks felt very "ZIO-ish" to me. Not sure if I'm doing it right, but it seems to work!
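A rough stdlib analogue of that state machine (the ZIO version would hold the state in a `Ref` and drive the probe from a repeating `Schedule`; the names below are illustrative, mirroring the ok/degrading/closed states mentioned above):

```scala
import java.util.concurrent.atomic.AtomicReference

sealed trait ConnState
case object Ok extends ConnState
case object Degrading extends ConnState
case object Closed extends ConnState

// Sketch of the health-check pattern: one atomic cell of connection
// state, updated by a periodic probe. A first failed ping degrades the
// connection, a repeated failure closes it, and a successful ping
// restores Ok (at which point the pool would reconnect).
final class ConnectionHealth(ping: () => Boolean) {
  private val state = new AtomicReference[ConnState](Ok)
  def current: ConnState = state.get()
  def checkOnce(): Unit =
    if (ping()) state.set(Ok)
    else state.updateAndGet(s => if (s == Ok) Degrading else Closed)
}
```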
Other stuff I learned:
* Hub-based event broadcasting (dumb broadcaster, smart subscribers)
* FiberRef for logging context propagation
* ZIO Metric API → OpenTelemetry bridging
* Scoped resource management (no leaked WebSocket connections!)
I probably over-engineered parts of it (event-driven architecture for a simple bot?), but I wanted to practice the patterns from Zionomicon.
Code is here if anyone's curious: https://github.com/Nestor10/fishy-zio-http-slackbot
Would love any feedback, especially if I'm doing something obviously wrong! Still learning this functional stuff and ZIO has been a fun (if occasionally humbling) journey.
TLDR: Made a Slack bot with ZIO, learned about fiber interruption for canceling stale LLM requests and WebSocket pool management. Probably over-engineered it but had fun!
https://redd.it/1o1j4fc
@r_scala
Hiring a new Scala Software Engineer with TypeLevel experience, Full Remote ($87K – $138K)
https://jobs.ashbyhq.com/chilipiper/ab556557-83cf-467d-90fb-5119dabf146c?utm_source=21Bax0GEqN
* Full remote
* Our stack is Scala, Cats Effect, microservices, GCP, Postgres, Kafka
* I'll be happy to answer any questions
The salary range for this role is between $87K – $138K • Offers Equity • Final compensation is determined by experience, skills, and location
# About Chili Piper
Chili Piper is a B2B SaaS startup. Our product helps clients turn inbound leads into qualified meetings instantly, helping revenue teams connect to buyers faster.
https://redd.it/1o2cgby
@r_scala
I compiled the fundamentals of two big subjects, computers and electronics in two decks of playing cards. Check the last two images too [OC]
https://redd.it/1o2cb06
@r_scala