Drill Storage Plugin for IPFS



  1. Introduction
  2. Compile
  3. Install
  4. Configuration
  5. Run


Minerva is a storage plugin for Drill that connects IPFS's decentralized storage with Drill's flexible query engine. Any data file stored on IPFS can be accessed from Drill's query interface just like a file stored on a local disk. Moreover, thanks to Drill's distributed execution capability, other instances that are also running Minerva can help accelerate the execution: the data stays where it is, and the queries go to the most suitable nodes, i.e. those that store the data locally, where the operations can be performed most efficiently.

Slides that explain our ideas and the technical details of Minerva:

A live demo: hosted on a private cluster of Minerva.

Note that this project is still in the early stages of development, so the overall stability and performance are not yet satisfactory. PRs are very much welcome!



This project depends on forks of the following projects:

Please clone and build these projects locally, or the compiler will complain about unknown symbols when you compile this project.

Compile under the Drill source tree

Clone to the contrib directory in Drill source tree, e.g. contrib/storage-ipfs:

cd drill/contrib/
git clone storage-ipfs

Edit the parent POM of the Drill contrib module (contrib/pom.xml) and add this plugin under the <modules> section:
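
Assuming the plugin was cloned to contrib/storage-ipfs as above, the entry to add is the module's directory name (the surrounding entries shown here are illustrative):

```xml
<modules>
  <!-- ... existing contrib modules ... -->
  <module>storage-ipfs</module>
</modules>
```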


Build from the root directory of Drill source tree:

mvn -T 2C clean install -DskipTests -Dcheckstyle.skip=true

The jars are in the storage-ipfs/target directory.


The executables and configuration files are in distribution/target/apache-drill-1.16.0. Copy the entire directory to somewhere outside the source tree and rename it, e.g. to drill-run, for testing later.

Copy the generated jar file drill-ipfs-storage-{version}.jar to drill-run/jars.

Copy java-api-ipfs-v1.2.2.jar, which is IPFS's Java API, along with its dependencies provided as jar files:


to drill-run/jars/3rdparty.

Optionally, copy the configuration override file storage-plugin-override.conf to drill-run/conf if you want Drill to automatically configure and enable the IPFS storage plugin at every (re)start.


  1. Set Drill hostname to the IP address of the node to run Drill:

    Edit file conf/ and change the environment variable DRILL_HOST_NAME to the IP address of the node. Use private or global addresses, depending on whether you plan to run it on a cluster or the open Internet.
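
    For example, assuming the environment file is conf/drill-env.sh (Drill's standard per-installation environment script) and a private address of 192.168.1.10:

```shell
# conf/drill-env.sh -- example value; substitute your node's actual IP address
export DRILL_HOST_NAME=192.168.1.10
```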

  2. Configure the IPFS storage plugin:

    If you are not using the configuration override file, you will have to manually configure and enable the plugin.

    Run Drill according to Section Run and go to Drill's web UI (by default at http://localhost:8047). Under the Storage tab, create a new storage plugin named ipfs and click the Create button.

    Copy and paste the default configuration of the IPFS storage plugin located at storage-ipfs/src/resources/bootstrap-storage-plugins.json:

    ipfs : {
        "host": "",
        "port": 5001,
        "max-nodes-per-leaf": 3,
        "ipfs-timeouts": {
          "find-provider": 4,
          "find-peer-info": 4,
          "fetch-data": 5
        },
        "groupscan-worker-threads": 50,
        "formats": null,
        "enabled": true
    }


    host and port are the host and API port on which your IPFS daemon is listening. Change them to match the configuration of your IPFS instance.

    max-nodes-per-leaf controls how many provider nodes will be considered when the query is being planned. A larger value increases the parallelization width but typically takes longer to find enough providers from DHT resolution. A smaller value does the opposite.

    ipfs-timeouts set the maximum amount of time, in seconds, for various time-consuming operations: find-provider is the time allowed for DHT queries to find providers, find-peer-info is the time allowed to resolve the network addresses of the providers, and fetch-data is the time the actual transmission is allowed to take.

    groupscan-worker-threads limits the number of worker threads used when the planner communicates with the IPFS daemon to resolve providers and peer info.

    formats specifies the formats of the files. It is unimplemented for now and does nothing.

    Click the Update button when you are done editing. The IPFS storage plugin should now be registered with Drill, and you can enable it with the Enable button.

  3. Configure IPFS

    Start the IPFS daemon first.

    Set a Drill-ready flag to the node:

    ipfs name publish $(\
      ipfs object patch add-link $(ipfs object new) "drill-ready" $(\
        printf "1" | ipfs object patch set-data $(ipfs object new)\
      )\
    )

    This flag indicates that an IPFS node is also capable of handling Drill queries, and the planner will consider it when scheduling a query for distributed execution. Nodes without this flag are ignored.
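
    The nested one-liner above can be unpacked into discrete steps, which may be easier to follow. This requires a running IPFS daemon, and the variable names are illustrative:

```shell
# Step-by-step equivalent of the drill-ready one-liner
EMPTY=$(ipfs object new)                                           # create an empty IPFS object
DATA=$(printf "1" | ipfs object patch set-data "$EMPTY")           # attach the data "1" to a copy of it
FLAG=$(ipfs object patch add-link "$EMPTY" "drill-ready" "$DATA")  # link it under the name "drill-ready"
ipfs name publish "$FLAG"                                          # publish the flag under this node's IPNS name
```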


Embedded mode

Start IPFS daemon:

ipfs daemon &>/dev/null &

Start drill-embedded:

drill-embedded

You can now execute queries via the command line as well as the web interface.
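
As a sketch of what a query might look like (the CID below is a placeholder, and the exact path syntax accepted by the plugin may differ between versions):

```sql
-- Hypothetical: query a file pinned on IPFS through the ipfs storage plugin.
-- QmExampleCid is a placeholder for a real content identifier.
SELECT * FROM ipfs.`/ipfs/QmExampleCid` LIMIT 10;
```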

As a background service

You can run drill-embedded as a background process detached from any controlling terminal. This is done with the help of tmux, which is available in most distributions of Linux.

Edit the systemd service file drill-embedded.service, so that the environment variable DRILL_HOME points to where Drill is installed:
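
For reference, a minimal sketch of what such a unit file might look like; the ExecStart path and the use of tmux here are assumptions, and the service file shipped with the project is authoritative:

```ini
[Unit]
Description=Apache Drill embedded with the IPFS storage plugin
After=network.target

[Service]
Type=forking
Environment=DRILL_HOME=/path/to/drill-run
ExecStart=/usr/bin/tmux new-session -d -s drill "${DRILL_HOME}/bin/drill-embedded"
ExecStop=/usr/bin/tmux kill-session -t drill

[Install]
WantedBy=multi-user.target
```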


Copy the service file to systemd's configuration directory, e.g. /usr/lib/systemd/system

cp drill-embedded.service /usr/lib/systemd/system

Reload the systemd daemon:

systemctl daemon-reload

Start the service:

systemctl start drill-embedded.service