Co-locating in an MPP-RDBMS

The idea behind a Massively Parallel Processing Relational Database Management System is that each node participating in the cluster should be as autonomous as possible. We have previously shown that, using the distribution techniques available in, for example, HP Vertica, it is possible to co-locate an instance of an anchor and all of its attributes, along with any knots they relate to and any ties the instance participates in. Recently, we were asked how to additionally co-locate closely related instances.

In a very simple thought experiment, this is actually not so difficult to do. Imagine a model of restaurants and their menus. It is reasonable to assume that, in many cases, an instance of a restaurant and its related menu instances will be part of the same result set. If we can co-locate these instances on the same node, the node becomes even more autonomous. However, a simple modular hash on the identities of restaurants and menus will distribute them evenly across the nodes, but it will for the most part not keep related instances together.

Let us look at a possible solution using some pseudo code.

Let 3 be the number of nodes in the cluster.
Let s_restaurant be a sequence starting with 1 incremented by 1.
Let s_menu0     be a sequence starting with 3 incremented by 3.
Let s_menu1     be a sequence starting with 1 incremented by 3.
Let s_menu2     be a sequence starting with 2 incremented by 3.
Let {1, 2, 3, 4, 5, 6} be ids of six restaurants.
Let id modulo 3 determine the node on which to locate an instance.
Then restaurants {3, 6} will be co-located on node 0.
Then restaurants {1, 4} will be co-located on node 1.
Then restaurants {2, 5} will be co-located on node 2.
Let s_menu0 generate ids for menus of restaurants on node 0.
Let s_menu1 generate ids for menus of restaurants on node 1.
Let s_menu2 generate ids for menus of restaurants on node 2.
Find the node of restaurant 4 through 4 modulo 3 = 1.
Use s_menu1 to create menus {1, 4, 7, 10} at restaurant 4.
Then menus {1, 4, 7, 10} will be co-located on node 1.
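
As a rough sketch, the same idea can be expressed in SQL by creating the per-node sequences up front and letting the restaurant id decide which sequence menu ids are drawn from. The sketch below uses SQL Server syntax with invented sequence and variable names; Vertica has a slightly different sequence syntax.

    -- One menu sequence per node, such that the generated id modulo 3 equals the node number.
    CREATE SEQUENCE s_restaurant AS bigint START WITH 1 INCREMENT BY 1;
    CREATE SEQUENCE s_menu0 AS bigint START WITH 3 INCREMENT BY 3; -- 3, 6, 9, ...  (id % 3 = 0)
    CREATE SEQUENCE s_menu1 AS bigint START WITH 1 INCREMENT BY 3; -- 1, 4, 7, ...  (id % 3 = 1)
    CREATE SEQUENCE s_menu2 AS bigint START WITH 2 INCREMENT BY 3; -- 2, 5, 8, ...  (id % 3 = 2)

    -- Creating a menu for restaurant 4: the restaurant lives on node 4 % 3 = 1,
    -- so its menus must draw their ids from s_menu1.
    DECLARE @restaurantId bigint = 4;
    DECLARE @menuId bigint;
    IF @restaurantId % 3 = 0
        SET @menuId = NEXT VALUE FOR s_menu0;
    ELSE IF @restaurantId % 3 = 1
        SET @menuId = NEXT VALUE FOR s_menu1;
    ELSE
        SET @menuId = NEXT VALUE FOR s_menu2;
    -- @menuId % 3 = @restaurantId % 3, so the menu ends up on the same node as its restaurant.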

The drawbacks are that you need as many sequences as there are nodes, and that the number of nodes may not stay the same over time. However, similar logic could be applied to handle the redistribution of instances when a node is added to the cluster.

Anchor Modeling Academy

We will be bringing some of our courses online, starting with an Introduction to Anchor Modeling. It is a four-hour video course for $299 that gives you a general introduction to Anchor modeling, serving as a good starting point if you are interested in the technique. A link to the course can be found in the sidebar to the left, or you can click here: anchor.teachable.com. This will hopefully be an opportunity for those of you who have shown interest in our courses but have been unable to travel to Sweden to take them. We’re open to suggestions on what you would like to see as a second course that builds upon this one, so let us know!


Anchor Modeler 0.99 Released

We are proud to announce the release of version 0.99 of the modeling tool. This version has been in the making for over a year, and code generated from it is, as usual, already in production for a number of data warehouses and system databases. Expect a more user-friendly interface that once again works in Chrome, after they unexpectedly and deliberately broke SVG support. The generated code has, among other things, improved trigger logic. For example, if you try to update a static attribute, you will now get a warning rather than a failure. This should make life easier, particularly for those using our sisula ETL Framework for metadata-driven Data Warehouse automation. Using this framework together with the latest modeling tool, we have built and put a DW in production in record time: it took less than a week to model, populate three years of history, and start a daily incremental load of a Data Warehouse used in a high-security environment.

Anchor Model Generator

A while back, Juan-José van der Linden created a script that reverse-engineers a database into an Anchor model. He was kind enough to donate that script to the community, and it is available in our forum. Now there is also a second effort, available in the form of the script below. This is a work in progress and will be updated with more features in the future. Perhaps we can merge the best features from JJ’s script into this one, or the other way around.

Please note that the script uses column statistics to determine whether knots should be created, so it may take a long time to run when no statistics are available. It reuses existing statistics, so a second run of the script is much faster. It tries to determine ties based on primary keys and matching column names.
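
As a simplified illustration of the kind of heuristic involved, a column whose number of distinct values is very small compared to the total number of rows is a good knot candidate. The query below is only a sketch with invented table and column names; the actual script reads this information from the column statistics rather than scanning the table.

    -- Hypothetical knot heuristic: few distinct values in many rows suggests a knot.
    SELECT
        COUNT(*)               AS total_rows,
        COUNT(DISTINCT Status) AS distinct_values
    FROM dbo.Orders;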

The following script can be used to generate knot loading code, based on the data stored in the descriptions of the knots in the model after running the script above.

The following script can be used to generate source to target mappings for use with the sisula ETL framework, based on the data stored in the descriptions of the attributes in the model after running the script above.

24 New Certified Anchor Modelers

Last week we had the pleasure of certifying 24 new Anchor Modelers in the Netherlands. In other words, if you want to find expertise on Anchor modeling, you should head over to our Directory and contact them! We also held a public session with an introduction to Anchor modeling, showing how it is possible to capture facts like “someone being somewhat sure about something, while someone else is completely sure about its opposite”. We were happy to have participants from Nike, the Dutch Police, Rabobank, Essent, and ChainPoint, consultancies like Free Frogs, Ordina, Cap Gemini, and Kadenza, and a number of freelancers.


Thanks everyone for participating at the courses and presentations, and the great discussions during the breaks!

34th International Conference on Conceptual Modeling

We had a great time at the 34th International Conference on Conceptual Modeling, also known as ER2015, running a demo station of the online Anchor modeler, standing in for Hans Hultgren and presenting Ensemble Modeling, participating in a panel discussion on Big Data, and presenting our own paper “Big Data Normalization for Massively Parallel Processing Databases” at MoBiD.

Here are some photos of Nikolay Golov presenting our paper. He had a very interesting story to tell of how Anchor has helped Avito scale out their data warehouse as both the business and the requirements grew very rapidly, multiplying the number of sources, rows, and terabytes by a factor of ten over three years.

[Photos: Nikolay Golov presenting our paper at ER2015]

Implementing constraints

One common question I am asked is how constraints are implemented in an Anchor modeled database. Temporality makes constraints less straightforward than in static tables. With unitemporal history, the constraint needs to hold for every possible time slice in the database, given every point in changing time (tc). If concurrency and bitemporal history are used, it needs to hold for every possible time slice for every positor, given every point in bitemporal time (p, tp, tc). The good thing is that a slice behaves just like a static database: it is a snapshot of what the information looked like at that point.

Implementing this using a CHECK constraint would be quite cumbersome. Even if you limit the points in time that need to be checked to those actually used in the database, they quickly grow to a large number. A better way is to do the checking with an AFTER INSERT trigger. At insert time, it is possible to check only against those slices that are affected by the insert.

Below is a script that generates a unique constraint for the stage name in our example model. Note that time slice checking is done by applying the point-in-time perspective of the affected anchor. This “template” for constraints can of course be extended to include more complex rules, such as the rule that two performances cannot be held on the same stage on the same date, found further down in the same script.
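
Since the generated script is not reproduced here, the sketch below only shows the general shape of such a trigger. It assumes Anchor-style names, such as an attribute table ST_NAM_Stage_Name with a changing time column ST_NAM_ChangedAt and a point-in-time table-valued function pST_Stage; the actual generated code differs in its details.

    -- Minimal sketch of a uniqueness constraint as an AFTER INSERT trigger (names are assumptions).
    CREATE TRIGGER uc_ST_NAM_Stage_Name ON ST_NAM_Stage_Name
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Only the time slices touched by the insert need to be checked.
        IF EXISTS (
            SELECT 1
            FROM (SELECT DISTINCT ST_NAM_ChangedAt FROM inserted) i
            CROSS APPLY pST_Stage(i.ST_NAM_ChangedAt) pit -- the point-in-time perspective
            WHERE pit.ST_NAM_Stage_Name IS NOT NULL
            GROUP BY i.ST_NAM_ChangedAt, pit.ST_NAM_Stage_Name
            HAVING COUNT(*) > 1
        )
        BEGIN
            ROLLBACK TRANSACTION;
            THROW 50000, 'Stage names must be unique in every time slice.', 1;
        END;
    END;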

Note that in the case of the performance date constraint, we do not have to take time into account, since none of the involved constructs are historized. However, there is one additional important consideration. Since the trigger joins the tie, the tie is expected to have its rows populated before the attributes. In other words, for the trigger to detect duplicates, the loading order must be:

  1. Generate the desired number of identities in the performance anchor.
  2. Populate the tie by connecting these identities to the associated stages.
  3. Populate the attributes on the performance anchor.

If the loading order cannot be guaranteed, a similar constraint must be placed on the tie. In that case, attributes will pass through their trigger if loaded first, since the join with the tie “truncates” the result set, but the trigger on the tie will fail because of the duplicates. A larger transaction containing the two steps would then still roll back to what the database looked like before the insert.
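
For the performance date example, the loading order could look like the following. All table names, column names, and values are invented for the illustration and do not match the generated model exactly.

    -- 1. Generate the desired number of identities in the performance anchor.
    INSERT INTO PE_Performance (Metadata_PE)
    VALUES (0), (0); -- two new performances, assumed to receive PE_ID 101 and 102

    -- 2. Populate the tie, connecting the new performances to stage 42.
    INSERT INTO PE_at_ST_heldAt (PE_ID_at, ST_ID_heldAt, Metadata_PE_at_ST)
    VALUES (101, 42, 0), (102, 42, 0);

    -- 3. Only now populate the attributes, so that the constraint trigger can join
    --    the tie and detect duplicate stage and date combinations.
    INSERT INTO PE_DAT_Performance_Date (PE_ID, PE_DAT_Performance_Date, Metadata_PE_DAT)
    VALUES (101, '2016-06-01', 0), (102, '2016-06-02', 0);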

Anchor Modeler 0.98 Released

We are proud to announce the release of version 0.98 of the modeling tool. This version has been in the making for over a year, and is already used in production for a number of data warehouses and system databases. The graphical interface, while retaining the look and feel, has been completely rewritten in animated SVG, which has improved performance and usability over the old canvas implementation. This step was necessary in order to make it easier to extend the tool graphically. The test version, now at 0.99, will make use of this in order to add support for visualising natural keys. A number of changes have also been made to the triggers, utilising nested triggers in order to reduce the size of any single trigger. Remember to disable triggers before doing inserts directly into attribute tables, for example when you use ETL tools rather than the triggers on the latest view.
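
For example, in SQL Server the triggers on an attribute table can be switched off and on around a direct load. The table name below follows the conventions of the stage and performance example model and is only an assumption.

    -- Switch off triggers around a direct bulk load into an attribute table.
    DISABLE TRIGGER ALL ON dbo.ST_NAM_Stage_Name;
    -- ... perform the direct inserts from the ETL tool here ...
    ENABLE TRIGGER ALL ON dbo.ST_NAM_Stage_Name;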

The test version has already received a number of fixes, primarily to align it with the current research in Anchor modeling. Among these is decisiveness, which controls whether a positor may hold multiple beliefs at a given point in bitemporal time, or only a single belief (the default behaviour).

The Iceberg Demo

We created the Iceberg Exercise in order to demonstrate a business case in which all features of Anchor modeling could be showcased. It has been presented at a number of conferences, but the actual model and example data with queries had not been published online. These are now made publicly available along with an explanatory video tutorial.

The model below captures most of the requirements given in the documentation of an Iceberg Tracking and Drift Prediction System. Icebergs have attached transmitters in order to keep track of their location. Icebergs may split or merge during their lifetime and may enter or exit certain geographical areas. Icebergs and transmitters have a number of attributes, some of which may change over time. The model is concurrent-reliance-temporal in order to capture concurrent, but possibly conflicting, views of an iceberg, such as sightings from passing boats.

[Figure: the Iceberg model]

After generating the SQL code that creates an implementation of the model, the following script for Microsoft SQL Server, used in the tutorial, can be run in order to create some example data and run some illustrative queries.

The script has been extended to show some other features as well.

Metadata driven Anchor DW Automation

A while ago we created a metadata- and SQL-driven ELT framework for DW automation, particularly aimed at Anchor modeled data warehouses. It is now in use for three data warehouses and has matured enough for public release. The project is Open Source and available on GitHub (click here), and we have put together a playlist of video tutorials introducing the functionality of the framework. The framework uses the same sisula engine as the online modeling tool for generating the SQL code.