Lightbend Activator

Slick Plain SQL Queries (Slick 3.0)

March 30, 2015
Tags: reactive, basics, slick, starter, sql, scala

This tutorial shows you how to use Plain SQL queries with Slick.

How to get "Slick Plain SQL Queries (Slick 3.0)" on your computer

There are several ways to get this template.

Option 1: Choose slick-plainsql-3.0 in the Lightbend Activator UI.

Already have Lightbend Activator (get it here)? Launch the UI then search for slick-plainsql-3.0 in the list of templates.

Option 2: Download the slick-plainsql-3.0 project as a zip archive

If you haven't installed Activator, you can get the code by downloading the template bundle for slick-plainsql-3.0.

  1. Download the Template Bundle for "Slick Plain SQL Queries (Slick 3.0)"
  2. Extract the downloaded zip file to your system
  3. The bundle includes a small bootstrap script that can start Activator. To start Lightbend Activator's UI:

    In your File Explorer, navigate to the directory where the template was extracted, right-click the file named "activator.bat", select "Open", and if prompted with a security warning, click to continue:

    Or from a command line:

     C:\Users\typesafe\slick-plainsql-3.0> activator ui 
    This will start Lightbend Activator and open this template in your browser.

Option 3: Create a slick-plainsql-3.0 project from the command line

If you have Lightbend Activator, use its command line mode to create a new project from this template. Type activator new PROJECTNAME slick-plainsql-3.0 on the command line.

Option 4: View the template source

The creator of this template maintains it at

Option 5: Preview the tutorial below

We've included the text of this template's tutorial below, but it may work better if you view it inside Activator on your computer. Activator tutorials are often designed to be interactive.

Plain SQL Queries

Slick is a Functional Relational Mapping (FRM) library for Scala where you work with relational data in a type-safe and functional way. Here is an example:

coffees.filter(_.price < 10.0).map(

This will produce a query equivalent to the following SQL:

select COF_NAME from COFFEES where PRICE < 10.0

Developers benefit from the type-safety and composability of FRM as well as being able to reuse the typical Scala collection APIs like filter, map, flatMap, etc. Learn more about Slick's Scala-based query API in the "Hello Slick" Activator template.
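As a plain-Scala illustration (no Slick involved), the same filter/map pipeline works on an ordinary collection; this is the API surface that Slick's query language mirrors:

```scala
// Plain Scala collection sketch (no Slick). Coffee here is a
// hypothetical case class standing in for a table row.
case class Coffee(name: String, price: Double)

val coffees = Seq(Coffee("Colombian", 7.99), Coffee("French_Roast", 12.50))

// Same filter/map shape as the Slick query above.
val cheapNames: Seq[String] = coffees.filter(_.price < 10.0).map(
// cheapNames == Seq("Colombian")
```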

Sometimes you may still need to write your own SQL code for an operation which is not well supported at a higher level of abstraction. Instead of falling back to the low level of JDBC, you can use Slick’s Plain SQL queries with a much nicer Scala-based API that integrates nicely into your asynchronous application. This template will get you started with the Plain SQL API using a working application.

Run the App

Slick is a library that is easy to include in any project. This project uses the sbt build tool so the dependency for Slick is specified in the build.sbt file. To make things simple this project uses an embedded H2 in-memory database.

Learn more about connecting to databases in the Slick docs.

This template includes a simple Scala application, PlainSQL.scala, which is composed of individual traits demonstrating different aspects of Plain SQL queries in Slick. This application automatically runs when Activator is started and then re-runs after every successful compile. You can see the output in Run.

Database Connection

Database connections are usually configured via Typesafe Config in your application.conf, which is also used by Play and Akka for their configuration:

h2mem1 = {
  url = "jdbc:h2:mem:test1"
  driver = org.h2.Driver
  connectionPool = disabled
  keepAliveConnection = true
}

The default connection pool is HikariCP. Since a connection pool is not necessary for an embedded H2 database, we disable it here. When you use a real, external database server, the connection pool provides improved performance and resilience. The keepAliveConnection option (which is only available without a connection pool) keeps an extra connection open for the lifetime of the Database object in the application. It is useful for managing the lifecycle of named in-memory databases which keep their data as long as there are still open connections.

In the body of PlainSQL.scala we create a Database object from the configuration. This causes a thread pool (and usually also a connection pool) to be created in the background. You should always close the Database object at the end to release these resources. PlainSQL.scala is a standalone command-line application, not running inside of a container which takes care of resource management, so we have to do it ourselves. Since all database calls in Slick are asynchronous, we are going to produce a Future in the app, but eventually we have to wait for the result before exiting, which is achieved by the following scaffolding:

val db = Database.forConfig("h2mem1")
try {
  val f: Future[_] = {
    // body of the application
  }
  Await.result(f, Duration.Inf)
} finally db.close

If you are not familiar with asynchronous, Future-based programming in Scala, you can learn more about Futures and Promises in the Scala documentation.
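As a quick refresher (plain Scala, nothing Slick-specific), this is the Future/Await pattern the scaffolding above relies on:

```scala
import scala.concurrent.{Await, Future}
import
import scala.concurrent.duration.Duration

// A Future runs asynchronously on an execution context;
// Await.result blocks the current thread until the Future completes.
val f: Future[Int] = Future { 21 * 2 }
val answer: Int = Await.result(f, Duration.Inf)
// answer == 42
```

In the app, blocking with Await is only done once, at the very end, to keep the rest of the code fully asynchronous.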

String Interpolation

All Plain SQL queries in Slick are built via string interpolation using the sql, sqlu and tsql interpolators. They are available through the standard api._ import from a Slick driver. We use these interpolators in Interpolation.scala to build all queries for the app.

You can see the simplest use case in createCoffees, createSuppliers and insertSuppliers where the sqlu interpolator is used with a literal SQL string:

def createCoffees: DBIO[Int] =
  sqlu"""create table coffees(
    name varchar not null,
    sup_id int not null,
    price double not null,
    sales int not null,
    total int not null,
    foreign key(sup_id) references suppliers(id))"""

The sqlu interpolator is used for DDL and DML statements which produce a row count instead of a result set. Therefore they are of type DBIO[Int].

Any variable or expression injected into a query gets turned into a bind variable in the resulting query string. It is not inserted directly into a query string, so there is no danger of SQL injection attacks. You can see this used in insertCoffees:

def insert(c: Coffee): DBIO[Int] =
  sqlu"insert into coffees values (${}, ${c.supID}, ${c.price}, ${c.sales}, ${})"

The SQL statement produced by this method is always the same:

insert into coffees values (?, ?, ?, ?, ?)
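To see why injected values cannot cause SQL injection, here is a toy interpolator in plain Scala that mimics the mechanism (sqlDemo is a made-up name, not Slick's API): every interpolated value becomes a `?` placeholder in the statement text and is collected separately as a bind parameter:

```scala
// Toy sketch of the bind-variable mechanism (not Slick's actual code).
// The literal string parts are joined with '?' placeholders, and the
// interpolated arguments are returned separately as bind parameters.
implicit class SqlDemoInterpolator(sc: StringContext) {
  def sqlDemo(args: Any*): (String, Seq[Any]) =
    (sc.parts.mkString("?"), args.toSeq)
}

val name = "Colombian"
val price = 7.99
val (stmt, params) = sqlDemo"insert into coffees values ($name, $price)"
// stmt   == "insert into coffees values (?, ?)"
// params == Seq("Colombian", 7.99)
```

The statement text never contains the user-supplied values, so no value can alter the SQL that is executed.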

Note the use of the DBIO.sequence combinator in this method:

val combined: DBIO[Seq[Int]] = DBIO.sequence(inserts)

Unlike the simpler DBIO.seq combinator which runs a (varargs) sequence of database I/O actions in the given order and discards the return values, DBIO.sequence turns a Seq[DBIO[T]] into a DBIO[Seq[T]], thus preserving the results of all individual actions. It is used here to sum up the affected row counts of all inserts.
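The sequence combinator follows the same shape as scala.concurrent.Future.sequence. As a plain-Scala analogy (no Slick), summing per-action results looks like this:

```scala
import scala.concurrent.{Await, Future}
import
import scala.concurrent.duration.Duration

// Analogy in plain Scala: Future.sequence turns Seq[Future[Int]] into
// Future[Seq[Int]], just as DBIO.sequence turns Seq[DBIO[Int]] into
// DBIO[Seq[Int]]. Each Future(1) stands in for one insert's row count.
val inserts: Seq[Future[Int]] = Seq(Future(1), Future(1), Future(1))
val combined: Future[Seq[Int]] = Future.sequence(inserts)
val totalRows: Int = Await.result(combined,
// totalRows == 3
```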

Result Sets

In Interpolation.scala the method namesByPrice uses the sql interpolator which returns a result set produced by a statement. The interpolator by itself does not return a DBIO value. It needs to be followed by a call to as to define the row type:

  sql"""select,
    from coffees c, suppliers s
    where c.price < $price and = c.sup_id""".as[(String, String)]

This results in a DBIO[Seq[(String, String)]]. The call to as takes an implicit GetResult parameter that extracts data of the requested type from a result set. There are predefined GetResult implicits for the standard JDBC types, for Options of those (to represent nullable columns) and for tuples of types which have a GetResult. For non-standard return types you have to define your own converters.

You can see this in Transfer.scala where we define two data transfer classes, Coffee and Supplier. In order to use these classes for returning data from a query, we also define two implicit GetResult values, getSupplierResult and getCoffeeResult.

GetResult[T] is simply a wrapper for a function PositionedResult => T. The implicit val for Supplier uses the explicit PositionedResult methods getInt and getString to read the next Int or String value in the current row. The second one uses the shortcut method << which returns a value of whatever type is expected at this place. (Of course you can only use it when the type is actually known like in this constructor call.)
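Conceptually, the mechanism can be sketched in plain Scala (no Slick; Row is a stand-in for PositionedResult, and all names here are made up for illustration):

```scala
// Plain-Scala sketch of the GetResult idea (not Slick's actual classes).
// A converter is just a function from a positioned row to a value.
case class Supplier(id: Int, name: String)

// Stand-in for PositionedResult: column values plus a cursor that
// advances with every read.
class Row(values: Seq[Any]) {
  private var pos = 0
  def nextInt: Int = { val v = values(pos).asInstanceOf[Int]; pos += 1; v }
  def nextString: String = { val v = values(pos).asInstanceOf[String]; pos += 1; v }
}

// The equivalent of an implicit GetResult[Supplier]: read the columns
// in order and feed them to the constructor.
val getSupplier: Row => Supplier = r => Supplier(r.nextInt, r.nextString)

val s = getSupplier(new Row(Seq(49, "Superior Coffee")))
// s == Supplier(49, "Superior Coffee")
```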

Splicing Literal Values

While most parameters should be inserted into SQL statements as bind variables, sometimes you need to splice literal values directly into the statement, for example to abstract over table names or to run dynamically generated SQL code. You can use #$ instead of $ in all interpolators for this purpose. In Interpolation.scala you can see this in action in the method coffeeByName:

val table = "coffees"
sql"select * from #$table where name = $name".as[Coffee]

Type-Checked SQL Statements

The interpolators you have seen so far only construct a SQL statement at runtime. This provides a safe and easy way of building statements but they are still just embedded strings. If you have a syntax error in a statement or the types don't match up between the database and your Scala code, this cannot be detected at compile-time. You can use the tsql interpolator instead of sql to get just that. We use it in a separate app, TypedSQL.scala. You can run it by switching to TypedSQL in Run.

In order to give the compiler access to the database, you have to provide a configuration that can be resolved at compile-time. This is done with the StaticDatabaseConfig annotation:

@StaticDatabaseConfig("file:src/main/resources/application.conf#tsql")


It takes a URI which points to the path "tsql" in our application.conf. Unlike the "h2mem1" configuration that we have used before, "tsql" is a configuration for a DatabaseConfig object, not just a Database object. A DatabaseConfig consists of a Slick driver and a matching Database:

tsql {
  driver = "slick.driver.H2Driver$"
  db {
    connectionPool = disabled
    driver = "org.h2.Driver"
    url = "jdbc:h2:mem:tsql1;INIT=runscript from 'src/main/resources/create-schema.sql'"
  }
}

In the database URL we run an init script on an empty in-memory database to set up the database schema. That way we can keep the schema code in our project (in create-schema.sql) and perform the compile-time checking without requiring access to an external database server.

In TypedSQL.scala we also retrieve the statically configured database for use at runtime:

val dc = DatabaseConfig.forAnnotation[JdbcProfile]
import dc.driver.api._
val db = dc.db

This gives us the Slick driver for the standard api._ import and the Database. Note that it is not mandatory to use the same configuration. You can get a Slick driver and Database at runtime in any other way you like and only use the StaticDatabaseConfig for compile-time checking.

The tsql interpolator is used in the getSuppliers method:

tsql"select * from suppliers where id > $id"

Try changing the declared return type of getSuppliers or introducing a syntax error in the statement. You will get a compiler error in the Compile tab.

Next Steps

Check out the full Slick manual and API docs.

You can also find more Slick templates, contributed by both the Slick team and the community, here in Activator.