PIKES works out-of-the-box on GNU/Linux machines (tested on Debian, Ubuntu and Red Hat). It also works on Mac OS X, but the UKB module (word sense disambiguation) must be installed separately (see below).

The software requires Java 1.8 and at least 8 GB of RAM (12 GB recommended) for the models.

We provide a single full package, containing all modules, models, and configurations needed to run PIKES straightaway. The package includes:

  • the PIKES Java library, which includes, among others, Java libraries and models from:

  • WordNet 3.0 - a large lexical database of English developed at Princeton University. Nouns, verbs, adjectives and adverbs are grouped into sets of cognitive synonyms (synsets), each expressing a distinct concept. Synsets are interlinked by means of conceptual-semantic and lexical relations. (More info) (License)

  • UKB - a collection of programs for performing graph-based Word Sense Disambiguation and lexical similarity/relatedness using a pre-existing knowledge base. (More info) (Source code) (License)

Execute the following commands on a Bash shell to download and extract PIKES full package:

wget # Download the full package
tar xzf pikes-all.tar.gz
cd pikes

Running PIKES on GNU/Linux (Interactive Mode)

If you want to run PIKES on GNU/Linux in Interactive Mode, just execute the provided script. After about a minute, PIKES should be listening on port 8011 (you can change the port by modifying the PORT variable in the script file).

Various API methods are available:


Given some text, this returns a NAF file containing the linguistic annotations produced by the annotators used in PIKES. Call example (server is the name of the server hosting PIKES - e.g. localhost):
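A minimal sketch of such a call with curl is shown below. The endpoint path (naf) and the text parameter name are placeholders, not confirmed API details: substitute the actual method path of your PIKES installation.

```shell
# Placeholder endpoint path "naf": replace it with the actual API method name.
SERVER=localhost   # host running PIKES
PORT=8011          # default PIKES port
curl -s --max-time 5 --data-urlencode "text=G. W. Bush was born in Connecticut." \
     "http://$SERVER:$PORT/naf" -o output.naf \
  || echo "PIKES server not reachable at $SERVER:$PORT"
```

If the call succeeds, output.naf contains the NAF document with the linguistic annotations.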



Given some text, this returns the RDF content (in TriG format) extracted by PIKES from the given text. Call example:
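A sketch of the corresponding call, again with a placeholder endpoint path (trig) that should be replaced with the actual method name of your installation:

```shell
# Placeholder endpoint path "trig": replace it with the actual API method name.
SERVER=localhost
PORT=8011
curl -s --max-time 5 --data-urlencode "text=G. W. Bush was born in Connecticut." \
     "http://$SERVER:$PORT/trig" -o output.trig \
  || echo "PIKES server not reachable at $SERVER:$PORT"
```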



PIKES comes with a web interface (like the web demo available on the PIKES web site); graphviz must be installed on the server to run it. On Debian/Ubuntu, just run apt-get install graphviz and restart PIKES. The demo interface (with an input textbox for text) is written in PHP and available under the src/webdemo/ folder in the project. To access it, just point your browser to the demo URL.


PIKES execution and processing (e.g., which annotators are used) is configurable via a property file; see, e.g., the config-pikes.prop file included in the PIKES package. If you want to pass Stanford CoreNLP configurations, just prepend stanford. to the name of the property (see, e.g., stanford.dcoref.maxdist in the provided configuration file). To select the annotators to be applied, just change the value of stanford.annotators. For example, with stanford.annotators = tokenize, ssplit only the tokenizer and the sentence splitter will run.
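For instance, a fragment of such a property file might look like the following (the property names are the ones mentioned above; the values are purely illustrative):

```properties
# Stanford CoreNLP settings: prefix "stanford." to the CoreNLP property name
stanford.annotators = tokenize, ssplit, pos, lemma
stanford.dcoref.maxdist = 50

# PIKES-specific settings
max_text_len = 10000
```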

The maximum length of text (in characters) processed by PIKES is limited by the max_text_len property in the configuration file. Set this value according to your needs.

If you experience problems or very slow performance, try increasing the RAM value in the script.

Running PIKES on GNU/Linux (Batch Processing Mode)

PIKES can efficiently process large quantities of files in parallel, in the so-called Batch Processing Mode.

STEP 0 - Creating the input NAF files

The documents to be processed by PIKES have to be provided as “input NAF” files. An “input NAF” file is a NAF file that contains the text to be processed within the <raw> tag, plus a minimal set of header attributes.

Example of minimal input NAF file:

<?xml version="1.0" encoding="UTF-8"?>
<NAF xml:lang="en" version="v3">
  <nafHeader>
    <fileDesc title="Juan Bautista de Anza and the Route to San Francisco Bay" />
    <public publicId="d331" uri="" />
  </nafHeader>
  <raw><![CDATA[Juan Bautista de Anza and the Route to San Francisco Bay.
Juan Bautista de Anza, from a portrait in oil by Fray Orsi in 1774.  On March 28, 1776, Basque New-Spanish...]]></raw>
</NAF>

To automatically build the input NAF files for your document collection, have a look at some of the converters we have developed. A generic java code for converting plain text files to input NAFs can be found here.
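As a rough sketch, a plain-text file can be wrapped into a minimal input NAF with a few lines of shell. The file names (demo.txt, demo.naf) and the document id are illustrative; the converters mentioned above handle escaping and metadata more robustly.

```shell
# Sketch: wrap a plain-text file into a minimal "input NAF" file.
printf 'On March 28, 1776, Anza reached San Francisco Bay.' > demo.txt
DOC_ID="d001"   # illustrative document identifier
cat > demo.naf <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<NAF xml:lang="en" version="v3">
  <nafHeader>
    <fileDesc title="$DOC_ID" />
    <public publicId="$DOC_ID" uri="" />
  </nafHeader>
  <raw><![CDATA[$(cat demo.txt)]]></raw>
</NAF>
EOF
echo "wrote demo.naf"
```

Note that this naive approach breaks if the text itself contains the CDATA terminator "]]>"; a real converter must escape it.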

STEP 1 - Processing the input NAF files with PIKES

To process the input NAF files with PIKES you can adapt the script provided in the PIKES full package:


java $RAM -cp $CLASSPATH eu.fbk.dkm.pikes.tintop.FolderOrchestrator -c $CONF -i $INPUT_FOLDER -o $OUTPUT_FOLDER -s $PARALLEL_INSTANCES -z $MAXLENGTH

Adapt the script with the actual full path of the folder containing the input NAF files (INPUT_FOLDER) and of the folder where you want the processed files to be placed (OUTPUT_FOLDER). Input NAF files may be organized in subfolders, and PIKES will preserve the subfolder structure in the output folder. PARALLEL_INSTANCES tells PIKES how many instances to run in parallel. MAXLENGTH tells PIKES to skip (i.e., not process) files with more than MAXLENGTH characters (very large files may take a long time to process). RAM tells PIKES how much memory to use (you may have to increase it, based on the size of the files to be processed).
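As a sketch, the variables in the script might be set as follows (all paths and values here are illustrative, not defaults):

```shell
# Illustrative settings for the batch-processing script
RAM="-Xmx12G"                    # JVM heap; increase for large files
CONF="config-pikes.prop"         # PIKES configuration file
INPUT_FOLDER="/data/naf/input"   # full path of the input NAF files
OUTPUT_FOLDER="/data/naf/output" # full path for the processed files
PARALLEL_INSTANCES=4             # concurrent pipeline instances
MAXLENGTH=100000                 # skip files longer than this (characters)
```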

PIKES is configured to process only those input NAF files for which no corresponding processed NAF file is available in the OUTPUT_FOLDER. That is, re-running on the same INPUT_FOLDER will not overwrite any file already in the OUTPUT_FOLDER; at the same time, if processing of the INPUT_FOLDER stops for any reason, re-running PIKES will process only the not-yet-processed input NAF files.
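The skip-already-processed behaviour can be sketched in shell as follows (this is an illustration of the logic, not PIKES code; folder names are made up):

```shell
# A file in the input folder is (re)processed only if no file with the
# same relative path already exists in the output folder.
IN=in_naf; OUT=out_naf
mkdir -p "$IN/sub" "$OUT/sub"
touch "$IN/a.naf" "$IN/sub/b.naf"
touch "$OUT/a.naf"                       # a.naf was already processed
for f in $(cd "$IN" && find . -name '*.naf'); do
  if [ -e "$OUT/$f" ]; then
    echo "skipping $f (already in $OUT)"
  else
    echo "would process $f"
  fi
done
```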

You can test the processing in this step with the input NAF files available here.

STEP 2 - Generating the RDF file(s)

To generate the RDF content from the processed NAF files you can adapt the script provided in the PIKES full package:


java -cp $CLASSPATH eu.fbk.dkm.pikes.rdf.Main rdfgen $NAF -o $OUTPUT_FILE -n -m -r

Adapt the script with the actual full path of the folder containing the input NAF files (NAF) and of the output file (OUTPUT_FILE). OUTPUT_FILE defines the name of the file containing the RDF content from all the NAFs considered (unless the -i flag is used). Its file extension also determines the RDF serialization used (e.g., tql). The flags at the end of the java call affect the behaviour of PIKES and the generated RDF content:

  • [-r,--recursive] : convert also files recursively nested in specified directories
  • [-i,--intermediate] : produce single RDF files (one for each input NAF) instead of a single output file; output path and format are derived from OUTPUT_FILE
  • [-m,--merge] : merge instances (smushing plus filtering of group instances)
  • [-n,--normalize] : normalize/compact output so to use less metadata statements
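As a sketch, the variables for the rdfgen command might be set as follows (paths and file names here are illustrative):

```shell
# Illustrative settings for the RDF-generation script
CLASSPATH="pikes-tintop-1.0-SNAPSHOT-jar-with-dependencies.jar"
NAF="/data/naf/output"           # folder with the processed NAF files
OUTPUT_FILE="/data/rdf/all.tql"  # extension selects the serialization (tql)
```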

You can test the generation of RDF content with the processed NAF files available here.

Running PIKES on Mac OS X

To execute PIKES on a Mac OS X machine, you need to recompile UKB, which requires Boost version 1.44 or higher. If you have Homebrew installed, just run brew install boost; otherwise you need to download and compile Boost yourself.

Then run:

git clone
cd ukb/src/
./configure
make

Finally, copy compile_kb, convert2.0, ukb_ppv and ukb_wsd to the ukb/ folder in the running directory.

During the ./configure command, you may need to specify where Boost has been installed using the --with-boost-include parameter. If you used Homebrew, you should add --with-boost-include=/usr/local/Cellar/boost/1.63.0/include (replace 1.63.0 with the version you installed).

Then, follow the instructions in Running PIKES on GNU/Linux (Interactive Mode) or Running PIKES on GNU/Linux (Batch Processing Mode) for using PIKES.

Recompiling PIKES from sources

If you want to generate the PIKES Java Library from sources, just execute:

git clone
cd utils
mvn clean install
cd ..
git clone
cd fcw
git checkout develop
mvn clean install
cd ..
git clone
cd tint
git checkout develop
mvn clean install
cd ..
git clone
cd pikes
git checkout develop
mvn clean package -DskipTests -Prelease

You’ll get the pikes-tintop-1.0-SNAPSHOT-jar-with-dependencies.jar package in the pikes-tintop/target/ folder. Just copy it to the running folder and restart PIKES.

Last Published: 2022/02/04.
