Posts

Showing posts from 2014

Profiling the Thorium actor model engine with LTTng UST

Thorium is an actor model engine written in C (C99). It uses MPI and Pthreads. The latency in Thorium when sending small messages between actors recently came to my attention. In this post, LTTng-UST is used to generate actor message delivery paths annotated with time deltas for each step.

Perf

I have been working with perf for a while now, but found it mostly useful for hardware counters. I typically use the following command to record events with perf. Note that $thread is the Linux LWP (lightweight process) thread number.

    perf record -g \
        -e cache-references,cache-misses,cpu-cycles,ref-cycles,instructions,branch-instructions,branch-misses \
        -t $thread -o $thread.perf.data

As far as I know, perf can not trace things like message delivery paths in userspace.

Tracing with LTTng-UST

This week, I started to read about tracepoints (perf does support "Tracepoint Events"). In particular, I wanted to use tracepoints to understand some erratic behavior
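For reference, here is a minimal sketch of what an LTTng-UST tracepoint provider for message delivery could look like. The provider name thorium, the event name message_delivery, and the fields are hypothetical and not the actual BIOSAL code; the sketch only shows the general shape of the lttng/tracepoint.h API.

    /* thorium_tracepoint.h -- hypothetical tracepoint provider header.
     * The matching .c defines TRACEPOINT_CREATE_PROBES and TRACEPOINT_DEFINE
     * before including this header; link with -llttng-ust -ldl. */
    #undef TRACEPOINT_PROVIDER
    #define TRACEPOINT_PROVIDER thorium

    #undef TRACEPOINT_INCLUDE
    #define TRACEPOINT_INCLUDE "./thorium_tracepoint.h"

    #if !defined(THORIUM_TRACEPOINT_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
    #define THORIUM_TRACEPOINT_H

    #include <lttng/tracepoint.h>

    /* One event per step in a message delivery path. */
    TRACEPOINT_EVENT(
        thorium,
        message_delivery,
        TP_ARGS(int, tag, int, source_actor, int, destination_actor),
        TP_FIELDS(
            ctf_integer(int, tag, tag)
            ctf_integer(int, source_actor, source_actor)
            ctf_integer(int, destination_actor, destination_actor)
        )
    )

    #endif /* THORIUM_TRACEPOINT_H */

    #include <lttng/tracepoint-event.h>

In the engine code, each delivery step would then call tracepoint(thorium, message_delivery, tag, source, destination); the recorded events carry timestamps, from which the time delta of each step can be computed.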

Profiling a high-performance actor application for metagenomics

I am currently in an improvement phase where I break, build, and improve various components of the system. The usual way of doing things is to have a static view of one node among all the nodes in an actor computation. The graphs look like this (one per configuration: 512x16, 1024x16, 1536x16, 2048x16). But with 2048 nodes, a single selected node may not be an accurate representation of what is going on. This is why, using Thorium profiles, we are generating 3D graphs instead. They look like this (again for 512x16, 1024x16, 1536x16, 2048x16).

The public datasets from the DOE/JGI Great Prairie Soil Metagenome Grand Challenge

I am working on a couple of very large public metagenomics datasets from the Department of Energy (DOE) Joint Genome Institute (JGI). These datasets were produced in the context of the Grand Challenge program. Professor Janet Jansson was the Principal Investigator for the proposal named Great Prairie Soil Metagenome Grand Challenge (Proposal ID: 949). Professor C. Titus Brown wrote a blog article about this Grand Challenge. Moreover, the Brown research group published at least one paper using these Grand Challenge datasets (assembly with digital normalization and partitioning). Professor James Tiedje presented the Grand Challenge at the 2012 Metagenomics Workshop. Alex Copeland presented interesting work related to this Grand Challenge at Sequencing, Finishing and Analysis in the Future (SFAF) in 2012. Jansson's Grand Challenge included 12 projects. Below I made a list with colors (one color for the sample site and one for t

The Thorium actor engine is operational now; we can start to work on actor applications for metagenomics

I have been very busy during the last months. In particular, I completed my doctorate on April 10th, 2014, and we moved from Canada to the United States on April 15th, 2014. I started a new occupation on April 21st, 2014, at Argonne National Laboratory (a U.S. Department of Energy laboratory). But the biggest change, perhaps, was not one listed in the enumeration above. The biggest change was to stop working on Ray. Ray is built on top of RayPlatform, which in turn uses MPI for parallelism and distribution. But this approach is not an easy way of devising applications, because message passing alone is a very leaky, not self-contained, abstraction. Ray usually works fine, but it has some bugs. The problem with leaky abstractions is that they lack simplicity and become far too complex to scale out. For example, it is hard to add new code to an existing code base without breaking anything. This is the case because MPI only offers a fixed number of ranks. Sure, the MPI standard has s

Is it required to use different priorities in a high-performance actor system?

I was reading a log file from an actor computation. In particular, I was looking at the outcome of a k-mer counting computation performed with Argonnite, which runs on top of Thorium. Argonnite is an application in the BIOSAL project and Thorium is the engine of the BIOSAL project (which means that all BIOSAL applications run on top of Thorium). In BIOSAL, everything is an actor or a message, and these are handled by the Thorium engine. Thorium is a distributed engine: a computation with Thorium is distributed across BIOSAL runtime nodes. Each node has 1 pacing thread and a bunch of worker threads (for example, with 32 threads, you get 1 pacing thread and 31 workers). Each worker is responsible for a subset of the actors that live inside a given BIOSAL node. Obviously, you want each worker to have its own actors to keep every worker busy. Each worker has a scheduling queue with 4 priorities: max, high, normal, and low (these are the priorities used by the Erlang ERTS called BE
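To illustrate the scheduling idea described above, here is a minimal sketch in C (with hypothetical names, not the actual Thorium code) of a worker scheduling queue with 4 priority levels, dequeued from max down to low.

    #include <stdlib.h>

    /* Priority levels, from most to least urgent. */
    enum priority { PRIORITY_MAX = 0, PRIORITY_HIGH, PRIORITY_NORMAL, PRIORITY_LOW, PRIORITY_COUNT };

    struct actor;  /* opaque actor handle (hypothetical) */

    struct queue_node {
        struct actor *actor;
        struct queue_node *next;
    };

    /* One singly linked FIFO per priority level. */
    struct scheduling_queue {
        struct queue_node *head[PRIORITY_COUNT];
        struct queue_node *tail[PRIORITY_COUNT];
    };

    /* Pick the next actor to run: scan the levels from max to low and
     * pop the first non-empty FIFO. Returns NULL when the worker is idle. */
    struct actor *scheduling_queue_dequeue(struct scheduling_queue *self)
    {
        int i;

        for (i = PRIORITY_MAX; i < PRIORITY_COUNT; ++i) {
            struct queue_node *node = self->head[i];
            struct actor *actor;

            if (node == NULL)
                continue;

            actor = node->actor;
            self->head[i] = node->next;
            if (self->head[i] == NULL)
                self->tail[i] = NULL;
            free(node);
            return actor;
        }

        return NULL;
    }

This sketch only shows the dequeue order; a full implementation would also need the enqueue side and some policy to avoid starving the lower priority levels.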