Lightbend Activator

Hello Slick! (Slick 3.1)

Activator will be EOL-ed on May 24, 2017.

We’re making it easier and simpler for developers to get started with Lightbend technologies. Unfortunately, this means that future releases of Play, Akka and Scala will no longer include Activator support, and Lightbend’s Activator server will be decommissioned by the end of 2017. Instead of supporting Activator to create and set up development projects, we'll be supporting standard Giter8 templates for sbt users and Maven archetypes for Maven users. So going forward:

To create new Lightbend projects

Instead of using the Activator command, make sure you have sbt 0.13.13 (or higher), and use the “sbt new” command, providing the name of the template. For example, “$ sbt new akka/hello-akka.g8”. You can find a list of templates here.

Also, as a convenience, the Lightbend Project Starter allows you to quickly create a variety of example projects that you just unzip and run.

To create new templates

If you want to create new templates, you can now do that in Giter8.

To migrate templates from Activator to Giter8

If you created Activator templates in the past, please consider migrating them to Giter8 with this simple process.

Hello Slick! (Slick 3.1)

September 28, 2015
Tags: basics, slick, starter, reactive, scala

Slick is Typesafe's modern database query and access library for Scala. It allows you to work with stored data almost as if you were using Scala collections while at the same time giving you full control over when a database access happens and which data is transferred. You can also use SQL directly. This tutorial will get you started with a simple standalone Scala application that uses Slick.

How to get "Hello Slick! (Slick 3.1)" on your computer

There are several ways to get this template.

Option 1: Choose hello-slick-3.1 in the Lightbend Activator UI.

Already have Lightbend Activator (get it here)? Launch the UI then search for hello-slick-3.1 in the list of templates.

Option 2: Download the hello-slick-3.1 project as a zip archive

If you haven't installed Activator, you can get the code by downloading the template bundle for hello-slick-3.1.

  1. Download the Template Bundle for "Hello Slick! (Slick 3.1)"
  2. Extract the downloaded zip file to your system
  3. The bundle includes a small bootstrap script that can start Activator. To start Lightbend Activator's UI:

    In your File Explorer, navigate to the directory where the template was extracted, right-click the file named "activator.bat", select "Open", and if prompted with a warning, click to continue:

    Or from a command line:

     C:\Users\typesafe\hello-slick-3.1> activator ui 
    This will start Lightbend Activator and open this template in your browser.

Option 3: Create a hello-slick-3.1 project from the command line

If you have Lightbend Activator, use its command line mode to create a new project from this template. Type activator new PROJECTNAME hello-slick-3.1 on the command line.

Option 4: View the template source

The creator of this template maintains it at

Option 5: Preview the tutorial below

We've included the text of this template's tutorial below, but it may work better if you view it inside Activator on your computer. Activator tutorials are often designed to be interactive.

Preview the tutorial

Intro to Slick

Slick is a Functional Relational Mapping (FRM) library for Scala where you work with relational data in a type-safe and functional way. Here is an example:

coffees.filter(_.price < 10.0).map(

This will produce a query equivalent to the following SQL:

select COF_NAME from COFFEES where PRICE < 10.0
Developers benefit from the type-safety and composability of FRM as well as being able to reuse the typical Scala collection APIs like filter, map, groupBy, etc. This template will get you started learning Slick using a working application. Continue the tutorial to learn about how to run the application, run the tests, and explore the basics of Slick.

Run the App and the Tests

This template includes a simple Scala application, HelloSlick.scala, that does basic FRM operations with Slick. This application automatically runs when Activator is started and then re-runs after every successful compile. You can see the output in Run. Note: The example code in this app is intentionally verbose with type information. In normal applications, type inference is used more extensively, but the explicit types are included here to assist with learning.

The TablesSuite.scala file contains ScalaTest tests which do some basic integration tests. Check out the test results in Test.

Slick is a library that is easy to include in any project. This project uses the sbt build tool so the dependency for Slick is specified in the build.sbt file. To make things simple this project uses an embedded H2 in-memory database.

Learn more about connecting to databases in the Slick docs.

Schema / Table Mapping

The Tables.scala file contains the mappings for a Suppliers and a Coffees table. These Table classes create a mapping between a database table and a class. The table's columns are also mapped to functions. This mapping is called Lifted Embedding because the type of a column mapping is not the actual column value's type but a wrapper type. For a column that contains a Double value, the type of the mapping will be Rep[Double]. This enables type-safe queries to be built from the metadata and then executed against the database.

Using a table mapping object requires creating a TableQuery instance for the Table classes. For example, in HelloSlick.scala the suppliers val is the TableQuery instance for the Suppliers class.
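In code this is a one-liner; these TableQuery vals are the starting point for all queries against the mapped tables:

```scala
// One TableQuery per mapped table; queries, inserts, updates
// and deletes all start from these values
val suppliers = TableQuery[Suppliers]
val coffees   = TableQuery[Coffees]
```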

Learn more about mapping tables and columns in the Slick docs.

Database Connection

Database connections are usually configured via Typesafe Config in your application.conf, which is also used by Play and Akka for their configuration:

h2mem1 = {
  url = "jdbc:h2:mem:test1"
  driver = org.h2.Driver
  connectionPool = disabled
  keepAliveConnection = true
}

The default connection pool is HikariCP. Since a connection pool is not necessary for an embedded H2 database, we disable it here. When you use a real, external database server, the connection pool provides improved performance and resilience. The keepAliveConnection option (which is only available without a connection pool) keeps an extra connection open for the lifetime of the Database object in the application. It is useful for managing the lifecycle of named in-memory databases which keep their data as long as there are still open connections.
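For comparison, a configuration for an external database server would typically keep the pool enabled. A hypothetical PostgreSQL entry might look like the following; the URL, driver and pool size are placeholder assumptions, not part of this template:

```
mydb = {
  url = "jdbc:postgresql://localhost/mydb"
  driver = org.postgresql.Driver
  connectionPool = HikariCP
  numThreads = 10
}
```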

In the body of HelloSlick.scala we create a Database object from the configuration. This causes a thread pool (and usually also a connection pool) to be created in the background. You should always close the Database object at the end to release these resources. HelloSlick is a standalone command-line application, not running inside of a container which takes care of resource management, so we have to do it ourselves. Since all database calls in Slick are asynchronous, we are going to compose Futures throughout the app, but eventually we have to wait for the result. This gives us the following scaffolding:

val db = Database.forConfig("h2mem1")
try {
  val f: Future[_] = {
    // body of the application
  }
  Await.result(f, Duration.Inf)
} finally db.close

If you are not familiar with asynchronous, Future-based programming in Scala, you can learn more about Futures and Promises in the Scala documentation.
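As a brief refresher (plain Scala, no Slick involved), Futures compose with map and flatMap, and Await.result blocks until the value is available:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration

// map transforms the eventual value without blocking
val f: Future[Int] =, 3)).map(_.sum).map(_ * 2)

// Await.result blocks the current thread until the Future completes;
// like HelloSlick, only do this once at the very end of the program
val result: Int = Await.result(f, Duration.Inf)
println(result) // prints 12
```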

Creating and Inserting

To create the corresponding table from a mapping, get the schema via its TableQuery and call the create method on it, like:
Multiple schemas can also be combined as in HelloSlick.scala, to create all database entities and links (like foreign key references) in the correct order, even in the presence of cyclic dependencies between tables:

(suppliers.schema ++ coffees.schema).create

The result of .create is a database I/O action which encapsulates the DDL statements.

Creates / Inserts are as simple as appending the values to a TableQuery instance using either the += operator for a single row, or ++= for multiple rows. In HelloSlick.scala both of these ways of doing inserts are used.

Running Database I/O Actions

Like all other database operations, += and ++= return database I/O actions. If you do not care about more advanced features like streaming, effect tracking or extension methods for certain actions, you can denote their type as DBIO[T] (for an operation which will eventually produce a value of type T). Instead of running all actions separately, you can combine them with other actions in various ways. The simplest combinator is DBIO.seq which takes a variable number of actions of any type and combines them into a single DBIO[Unit] that runs the actions in the specified order. We use it in HelloSlick.scala to define setupAction which combines schema creation with some insert actions.
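A sketch of what such a combined action can look like (the row values here are illustrative, not necessarily the exact ones in HelloSlick.scala):

```scala
// Nothing touches the database yet; this merely stages the work
val setupAction: DBIO[Unit] = DBIO.seq(
  // Create the tables, including primary and foreign keys
  (suppliers.schema ++ coffees.schema).create,
  // Insert one supplier and a couple of coffees that reference it
  suppliers += (101, "Acme, Inc.", "99 Market Street", "Groundsville", "CA", "95199"),
  coffees ++= Seq(
    ("Colombian", 101, 7.99, 0, 0),
    ("Espresso",  101, 9.99, 0, 0)
  )
)
```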

So far we have only staged the operations. We can run them with

val setupFuture: Future[Unit] =

This performs the database calls asynchronously, eventually completing the returned Future.

When inserting data, the database usually returns the number of affected rows. For a batch insert (++=) this count is not guaranteed to be available from every database, which is why the return type is Option[Int], as can be seen in the definition of insertAction:

val insertAction: DBIO[Option[Int]] = ...

We can use the map combinator to run some code and compute a new value from the value returned by the action (or in this case run it only for its side effects and return Unit):

val insertAndPrintAction: DBIO[Unit] = { coffeesInsertResult =>
  // Print the number of rows inserted
  coffeesInsertResult foreach { numRows =>
    println(s"Inserted $numRows rows into the Coffees table")
  }
}
Note that map and all other combinators which run user code (e.g. flatMap, cleanup, filter) take an implicit ExecutionContext on which to run this code. Slick uses its own ExecutionContext internally for running blocking database I/O but it always maintains a clean separation and prevents you from running non-I/O code on it.

Querying and Streaming

Queries usually start with a TableQuery instance. In the simplest case you read the contents of an entire table by calling .result directly on the TableQuery to get a DBIO action, as shown in HelloSlick.scala:

val allSuppliersAction: DBIO[Seq[(Int, String, String, String, String, String)]] =

This produces a Seq[(Int, String, String, String, String, String)] that corresponds to the columns defined in the Table mapping. Filtering, sorting, and joining will be covered in the next few sections of the tutorial. We use another new combinator to combine the previously defined insertAndPrintAction with the new allSuppliersAction:

val combinedAction: DBIO[Seq[(Int, String, String, String, String, String)]] =
  insertAndPrintAction >> allSuppliersAction

val combinedFuture: Future[Seq[(Int, String, String, String, String, String)]] =

The >> combinator (also available under the name andThen) runs the second action after the first, similar to DBIO.seq but it does not discard the return value of the second action.
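When a later action needs the result of an earlier one, flatMap (or, equivalently, a for comprehension over DBIO) is used instead of >>. A sketch, with an illustrative inserted row:

```scala
// flatMap/map on DBIO need an ExecutionContext for the user code they run
import scala.concurrent.ExecutionContext.Implicits.global

val insertThenQuery: DBIO[Seq[String]] = for {
  numRows <- coffees += ("Mocha", 101, 8.49, 0, 0) // depends on nothing
  names   <-   // runs after the insert
} yield names
```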

The default query we've been using uses the * method on the Table mapping class. For instance, the suppliers TableQuery uses the * method defined in Tables.scala and returns all of the columns when executed because the * combines all of the columns. Often we just want to select a subset of the columns. To do this use the map method on a query, like:

val coffeeNamesAction: StreamingDBIO[Seq[String], String] =

This will create a new query that when executed just returns the name column. The generated SQL will be something like:

select COF_NAME from COFFEES
The type annotation above uses the type StreamingDBIO[Seq[String], String] instead of DBIO[Seq[String]] to also allow streaming. The first type parameter denotes the fully materialized result (as in DBIO) whereas the second type parameter is only the element type. Note that these types can always be inferred by the compiler. They are only spelled out explicitly in this tutorial to facilitate understanding. If you have a streaming action, you can use instead of to get a Reactive Streams Publisher instead of a Future. This allows data to be streamed asynchronously from the database with any compatible library like Akka Streams. Slick itself does not provide a full set of tools for working with streams but it has a foreach utility method for consuming a stream:

val coffeeNamesPublisher: DatabasePublisher[String] =


Note that a database I/O action does not yet start running when you call You must attach a Subscriber to the stream (i.e. start consuming the stream) to actually run the action.
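Putting the pieces together, the stream can be consumed with foreach; printing each element is just a stand-in for real per-element processing:

```scala
// Attaching the consumer starts the streaming query; the returned
// Future completes once the whole stream has been consumed
val streamingDone: Future[Unit] =
  coffeeNamesPublisher.foreach(name => println(s"Coffee: $name"))
Await.result(streamingDone, Duration.Inf)
```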

More CRUD Operations

Filtering / adding where statements to a query is done using methods like filter and take on a Query to construct a new query. For example, to create a new query on the Coffees table that selects only rows where the price is higher than 9.0, we use the following code in HelloSlick.scala:

coffees.filter(_.price > 9.0)

This produces a SQL statement equivalent to:

select * from COFFEES where PRICE > 9.0

Updates are done through a Query object by calling the update method. To update the sales column on all rows of the Coffees table, first create a new query for just that column:

val updateQuery: Query[Rep[Int], Int, Seq] =

Then call update with the new value to produce an action that will perform the update. Updates, like inserts, return the number of affected rows:

val updateAction: DBIO[Int] = updateQuery.update(100) // 100 is an example value

Deletes are done by just calling delete on a query to get an action. So to delete coffees with a price less than 8.0, you do:

val deleteQuery: Query[Coffees,(String, Int, Double, Int, Int), Seq] =
  coffees.filter(_.price < 8.0)

val deleteAction = deleteQuery.delete

This will produce SQL equivalent to:

delete from COFFEES where PRICE < 8.0

Query Composition

Sorting / adding order by clauses is done using methods like sortBy on a Query to create a new query. For example, in HelloSlick.scala you can see the coffees being sorted by price:

coffees.sortBy(_.price)
This would produce SQL equivalent to:

select * from COFFEES order by PRICE

The examples so far have taken a basic TableQuery and used a method to produce a new, more specific query. Due to the functional nature of the query API, this can be done repeatedly to produce more specific queries. For example, to create a query on the Coffees table that sorts them by name, takes the first three rows, filters those with a price greater than 9.0, and finally returns just the names, simply do:

coffees.sortBy(_.name).take(3).filter(_.price > 9.0).map(_.name)

This results in a new query that has a fairly complex implementation in SQL.


The Coffees table mapping in the Tables.scala file includes a foreign key mapping to the Suppliers table:

foreignKey("SUP_FK", supID, TableQuery[Suppliers])(

To use this foreign key in a joined query, it is easiest to use a for comprehension, like:

for {
  c <- coffees if c.price > 9.0
  s <- c.supplier
} yield (,

This creates a new query that gets the coffees with a price greater than 9.0, joins them with their suppliers, and returns the coffee and supplier names.
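The same query can also be written in the applicative style with an explicit join; a sketch:

```scala
// join ... on pairs up matching rows; the result is a query of
// (Coffees, Suppliers) tuples which we then project to the two names
val joinQuery = coffees
  .filter(_.price > 9.0)
  .join(suppliers)
  .on(_.supID ===
  .map { case (c, s) => (, }
```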

Computed Values

Computed values like minimum, maximum, summation, and average can be computed in the database using the query functions min, max, sum and avg, like:

This creates a new computed column expression (a Rep); add .result to get an action and run it in the usual way to get back the value. Check out the example in HelloSlick.scala.

Plain SQL / String Interpolation

Sometimes writing SQL by hand is the easiest and best way to go, but we don't want to lose the SQL injection protection that Slick includes. SQL String Interpolation provides a nice API for doing this. In HelloSlick.scala we use the sql interpolator:

val state = "CA"
val plainQuery = sql"select SUP_NAME from SUPPLIERS where STATE = $state".as[String]

This produces a database I/O action that can be run or streamed in the usual way.
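For example, running it (a minimal sketch using the plainQuery value from above):

```scala
// runs the interpolated statement; $state was turned into
// a bind variable, so there is no injection risk
val caSuppliers: Future[Seq[String]] =
```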

You can learn more about Slick's Plain SQL queries in the Slick Plain SQL Queries (Slick 3.0) template for Activator.

Case Class Mapping

The CaseClassMapping.scala file provides an example which uses a case class instead of tupled values. Run this example by selecting CaseClassMapping in Run. To use case classes instead of tuples, set up a def * projection which transforms the tuple values to and from the case class. For example:

def * = (id.?, name) <> (User.tupled, User.unapply)

This uses User's tupled function to convert an (Option[Int], String) tuple to a User, and the unapply function to do the opposite. Now all of the queries can work with User values instead of tuples.
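A condensed sketch of the whole pattern (the User case class and Users table mirror the ones in CaseClassMapping.scala):

```scala
case class User(id: Option[Int], name: String)

class Users(tag: Tag) extends Table[User](tag, "USERS") {
  def id   = column[Int]("ID", O.PrimaryKey, O.AutoInc)
  def name = column[String]("NAME")
  // Map the (Option[Int], String) tuple to and from the User case class;
  // id.? lifts the column to Option[Int] so unsaved rows can use None
  def *    = (id.?, name) <> (User.tupled, User.unapply)
}
val users = TableQuery[Users]
```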

Auto-Generated Primary Keys

The Users table mapping in CaseClassMapping.scala defines an id column which uses an auto-incrementing primary key:

def id = column[Int]("ID", O.PrimaryKey, O.AutoInc)
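With an auto-incrementing key, the database generates the id on insert. If you need the generated key back, Slick's returning method produces an action that yields it instead of the row count; a sketch:

```scala
// users returning yields the generated primary key
val insertUserAction: DBIO[Int] =
  (users returning += User(None, "Ann") // "Ann" is an example value
```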

Running Queries

So far you have seen how to get a Seq from a collection-valued query and how to stream individual elements. There are several other useful methods which are shown in QueryActions.scala. They are equally applicable to Lifted Embedding and Plain SQL queries.

Note the use of Compiled in this app. It is used to define a pre-compiled query that can be executed with different parameters without having to recompile the SQL statement each time. This is the preferred way of defining queries in real-world applications. It prevents the (possibly expensive) compilation each time and leads to the same SQL statement (or a small, fixed set of SQL statements) so that the database system can also reuse a previously computed execution plan. As a side-effect, all parameters are automatically turned into bind variables:

val upTo = Compiled { k: Rep[Int] =>
  ts.filter(_.k <= k).sortBy(_.k)
}
Next Steps

Check out the full Slick manual and API docs.

You can also find more Slick templates, contributed by both the Slick team and the community, here in Activator.