Temporal Dimensional Modeling

Back in 2012 we introduced a way to “fix” the issues with Slowly Changing Dimensions in Dimensional Modeling. That script was actually sent to Ralph Kimball, but seeing as not much has happened in the following seven years, we decided to become a bit more formal and write a paper about it. It is entitled “Temporal Dimensional Modeling” and can be read on ResearchGate. Even if you are not interested in Dimensional Modeling (disclaimer: we are not either), you can learn a lot about temporality and improper ways to manage it, through the existing SCD types. Of particular general interest is the twine, a clever way to find historically correct relationships, given information that is stored differently.

Here is the abstract from the paper:

One of the prevalent techniques for modeling data warehouses is and has for the last decade been dimensional modeling. As initially defined, it had no constructs for keeping a record of changes and only provided the as-is latest view of available information. Since its introduction and from increasing requirements to record changes, different approaches have been suggested to manage change, mainly in the form of slowly changing dimensions of various types. This paper will show that every existing type of slowly changing dimension may lead to undesired anomalies, either at read or at write time, making them unsuitable for application in performance critical or near real-time data warehouses. Instead, based on current research in temporal database modeling, we introduce temporal dimensions that make facts and dimensions temporally independent, and therefore suffer from none of said anomalies. In our research, we also discovered the twine, a new concept that may significantly improve performance when loading dimensions. Code samples, along with query results showing the positive impact of implementing temporal dimensions compared to slowly changing dimensions are also presented.

The experiments in which performance was measured were done using Transact-SQL in Microsoft SQL Server. The code for reproducing the tests is available on GitHub.

Modeling Consensus and Disagreement

If you didn’t know it before, let me tell you that consensus is a big thing in Sweden. Looking in a dictionary, consensus is defined as “agreement among all the people involved” and it is rare for Swedes to leave a meeting room before it has been reached. Honestly, it’s to the point where meetings can become very tedious, but perhaps the inefficiency of the meeting itself is outweighed by the benefits of having consensus when leaving the room. I think the jury is still out on that one though…

When it comes to databases, there is an unspoken understanding that there is consensus among those who want to retrieve information from them. There is consensus on what the information represents, what it means, and which constraints are imposed upon it. But, can we be sure that is the case? Wouldn’t it be nice if we could write down a timeline where we could prove there are intervals of consensus interspersed with intervals of disagreement? How great would it be if this were possible to deduce from the information itself?

This is where transitional modeling comes to the rescue. Let’s dig deeper into its two constructs, the posit and the assertion, which enable the modeling of consensus and disagreement. First, this is the structure of a posit:

[{(id1, role1), … ,(idN, roleN)}, value, time]

Every idi is a unique identifier, meaning that it uniquely represents a thing in whatever it is we are modeling. Once a thing has been assigned an id, it belongs to that thing alone for all eternity. In a posit, it is possible that idi = idj for i ≠ j, but the roles must be unique. The value is either a primitive value or an instance of some complex data type. The time is either a primitive or a fuzzy value. Let’s clarify this by looking at some examples:

[{(Arthur, beard color)}, red, 1972-1974]
[{(Arthur, address)}, {street: AStreet, zip code: 1A2B3C, …}, 1972]
[{(Arthur, husband), (Bella, wife)}, <married>, 1999-09-21]

Posits do not differentiate between properties and relationships. They both share the same structure, but properties are easy to recognise since they only have a single role and id. The interval 1972–1974 in the second posit means that the information is imprecise, and expresses that sometime within that interval Arthur grew a red beard, not that his beard was red between those years. If the color of the beard changes, a different posit would express this, along with the time the change occurred. As can be seen, the address is a complex data type in the form of a structure. In the marriage relationship the angled brackets around the value <married> indicate that it is a complex data type, which in this example is a value from an enumeration.

The data types were picked specifically so that some parallels to traditional database modeling techniques can be drawn. That beard color is the name of an attribute in Anchor modeling, possibly on a Person anchor, of which Arthur is an instance having a surrogate key as the unique identifier. That address is a satellite in Data Vault, since the value structure is basically a bunch of attributes, possibly on a Person hub, where Arthur may have some hash or concatenation as its unique identifier. The marriage relationship is a knotted tie in Anchor modeling, where the knot constrains the possible values to those found in the enumeration, or a link in Data Vault, connecting Persons to each other.

Posits are neither true nor false. They just are. This is where the magic happens though. First, in order to be able to reference the posits, let’s give them some names. In practice these could be their memory addresses in an in-memory structure or some identity column in a database table.

p1 = [{(Arthur, beard color)}, red, 1972-1974]
p2 = [{(Arthur, address)}, {street: AStreet, zip code: 1A2B3C, …}, 1972]
p3 = [{(Arthur, husband), (Bella, wife)}, <married>, 1999-09-21]

Now we introduce the positor as someone or something that can have opinions about posits. It could be a human, it could be a machine, or it could be something different, as long as it can produce assertions. An assertion has the following structure:

!(id, posit, reliability, time)

The id is a unique identifier for the positor who has an opinion about a posit. Positors are part of the same body of information and may be described by posits themselves. The reliability expresses with what certainty the positor believes the posit to be true and is a real value between -1 and 1. Finally, the time is when the positor is making the assertion. As it turns out, reliabilities have some interesting properties, such as being symmetric and able to express opposite beliefs. Somewhat sloppily expressed, negative reliabilities correspond to putting a not in front of the value. For example:

!(Me, “beard is red”, -1, Now) = !(Me, “beard is not red”, 1, Now)

This comes in handy, since storing the complement of a value is often unsupported in information stores, such as databases. If another assertion is added for the same posit but by a different positor, it can express a consensus or a disagreement.

!(Me, p2, 1, Now)
!(You, p2, 1, Now)
!(Me, p3, 0.75, Now)
!(You, p3, 0.5, Now)

So it seems both Me and You are in complete agreement on the address of Arthur and we can declare consensus. However, Me thinks that there is a 75% chance that the marriage posit is true, while You only believe that the chance is 50%. Here some additional guidelines are needed in order to determine if this means consensus or not. At least it can be determined that we are not in complete agreement any longer.

You are, due to the symmetrical nature of reliabilities, stating that there is a 50% chance that Arthur and Bella are not married. It is easy to be misled into believing that this means it could be any value whatsoever, but that is not the case. You are stating that it is either “married” or “not married”, with equal probabilities, but the enumeration could contain an arbitrary number of values, each of which would be a valid case for “not married”, making “married” more likely than the rest. This is not the same as the following assertion:

!(Else, p3, 0, Now)

Here Else is stating that it has no clue whatsoever what the value may be. This is useful, as it makes it possible to retract statements. Let’s say that Else also asserted that:

!(Else, p3, 1, Before)

It seems there was a time before, when Else was certain that the marriage took place as posited. Now Else has changed its mind though. This means that Now there is no consensus, if we look at Me, You, and Else, but Before there was. The full treatment of posits and assertions is available in our latest paper, entitled “Modeling Conflicting, Uncertain, and Varying Information”. It can be read and downloaded from ResearchGate or from the Anchor Modeling homepage. There you can, for example, also find out how You can refrain from being contradictory.
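
To make this a bit more concrete, below is a minimal sketch in Transact-SQL (the language used in our experiments) of one possible way to store posits and assertions and to check for consensus. The table and column names are illustrative only and are not the schema from the paper.

-- a minimal sketch: posits are opinion-free statements, assertions are opinions about them
create table Posit (
    PositId        int primary key,   -- p1, p2, p3, ...
    AppearanceSet  nvarchar(max),     -- e.g. '{(Arthur, husband), (Bella, wife)}'
    AppearingValue nvarchar(max),     -- e.g. 'married'
    AppearanceTime nvarchar(50)       -- e.g. '1999-09-21', or the imprecise '1972-1974'
);
create table Assertion (
    PositorId      nvarchar(50),      -- 'Me', 'You', 'Else'
    PositId        int references Posit (PositId),
    Reliability    decimal(3, 2) check (Reliability between -1 and 1),
    AssertionTime  datetime2
);
-- the latest opinion per positor and posit; when the lowest and highest of these
-- reliabilities coincide at 1, everyone is in complete agreement (consensus)
select PositId, min(Reliability) as lowest, max(Reliability) as highest
from (
    select *, row_number() over (
        partition by PositorId, PositId
        order by AssertionTime desc) as rn
    from Assertion
) latest
where rn = 1
group by PositId;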

 

The Illusion of a Fact

It is funny how limitations, when they have been around for a while, can be turned into beliefs that there is only one way of thinking. The right way. This is the case of databases and the information they store. We have been staring at the limitations of what databases can store for so long that we have started to think that the world works in the same way. Today I want to put such a misconception to rest. Databases store facts, so naturally we look for facts everywhere, but the truth is, in the real world there are very few facts.

The definition of a fact is “a piece of true information” and “things that are true or that really happened, rather than things that are imaginary or not true” according to the Macmillan dictionary. Let me then ask you, what do you know to be true? It is a fact that “the area of a square with the side x is x squared”, however limited to squares on a Euclidean plane. Mathematics, as it turns out, is one of the few disciplines in which we actually can talk about truth. This is not the case for science in general though.

Is the statement “There are no aliens on the dark side of the moon” a fact? You would, if asked if there are aliens on the dark side of the moon, probably answer that there are no aliens there. However, if you were pushed to prove it, you may think otherwise. The reasoning would be that there is this extremely minuscule chance there could be something alien there. We could leave it at that and disqualify the statement as a fact, but let’s not just yet. What is more interesting is why you are almost sure it is a fact.

Back in the days of the ancient Greeks, Aristarchus of Samos suggested that the Earth revolves around the Sun. Heliocentrism was then forgotten for a while, but brought back by the brave Galileo Galilei almost 2000 years later. You have to rely on these guys being right to begin with, and that the moon is not painted on the sky or made of cheese. Then, you have to rely on the Apollo 8 mission, in which astronauts actually observed the dark side. The photographs that were taken further imply that you rely on the science behind imagery and that the images have not been tampered with. You need to rely on aliens not having cloaking devices, on aliens in general being unlikely, and on any claimed observations not coming from credible sources.

You can build a tree view of all the things you rely on in order to feel assured that there are no aliens on the dark side of the moon. I just need to put one of them in doubt for the fact to become a non-fact. This illustrates how fragile facts are, and that they therefore constitute a small small small minority of the information we manage on a daily basis. Yet, for the most part we continue to treat all of it as facts.

For this reason, in transitional modeling, the concept of a fact is omitted and replaced by a posit. A posit has no truth value at all and is merely a syntactical construct. Assuming “There are no aliens on the dark side of the moon” is a posit just means that it is a statement that fits a certain syntax. In order for such a statement to gain meaning and some kind of truth value, someone called an asserter must have an opinion about it. A second construct, the assertion, semantically binds a posit to an asserter, and expresses the degree of certainty with which the asserter believes the statement to be true or not true. Together they express things like ‘Peter the asserter is almost completely sure that “There are no aliens on the dark side of the moon”’. Concurrently it may also be the case that ‘Paulina the other asserter thinks there is a slight chance that “There actually are aliens on the dark side of the moon”’.

Information is, in this view, factless and instead has two parts: the pieces of information (posits) and the opinions about the pieces (assertions), better representing its true nature. That two such simple constructs can lead to a rich theory, from which other modeling techniques such as Anchor modeling, Data Vault, and third normal form can be derived as special cases, may be a bit surprising. Read more about it in our latest scientific paper, entitled “Modeling Conflicting, Uncertain, and Varying Information”. It can be read and downloaded from ResearchGate or from the Anchor Modeling homepage.

Schema by Design

Lately, there’s been a lot of talk about when a schema should be applied to your data. This has led to a division of databases into two camps, those that do schema on write and those that do schema on read. The former is the more traditional, with relational databases as the main proponent, in which data has to be integrated into a determined schema before it can be written. The latter is the new challenger, driven by NoSQL solutions, in which data is stored more or less exactly as it arrives. In all honesty, both are pretty poor choices.

Schema on write imposes too much structure too early, which results in information loss during the process of molding it into a shape that fits the model. Schema on read, on the other hand, is so relaxed in letting the inquiring party make sense of the information that understandability is lost. Wouldn’t it be great if there were a way to keep all information and at the same time impose a schema that makes it understandable? In fact, now there is a way, thanks to the latest research in information modeling and the transitional modeling technique.

Transitional modeling takes a middle road between schema on read and schema on write that I would like to call schema by design. It imposes the theoretical minimum of structure at write time, from which large parts of a schema can be derived. It is then up to modelers, who may even disagree on classifications, to provide enough auxiliary information that it can be understood what the model represents. This “metainformation” becomes a part of the same body of information it describes, and abides by the same rules with the same minimum of structure.

But why stop there? As it turns out, types and identifiers can be described in the same way. They may be disagreed upon, be uncertain, or vary over time, just like information in general, so of course all that can be recorded. In transitional modeling you can go back to any point in time and answer an inquiry as it would have been answered then and from the point of view of anyone who had an opinion at the time. Actually, it does not even stop there, since constraints over the information, like cardinalities, are also represented in the same way. It all follows the same minimum of structure.

What then is this miraculous structure? Well, it relies on two constructs only, called posits and assertions, both of which are given proper treatment in our latest scientific paper, entitled “Modeling Conflicting, Unreliable, and Varying Information”. It can be read and downloaded from ResearchGate or from the Anchor Modeling homepage. If you have an interest in information modeling, and what the future holds, give it an hour. Trust me, it will be well spent…

Transitional Modeling

Our latest paper is now available, entitled “Modeling Conflicting, Unreliable, and Varying Information”, in which Transitional Modeling is formalized. It can either be viewed and referenced on ResearchGate or downloaded directly from here. Much of what is found in the paper has been part of our courses since we began certifications, and is also available in the online course, but new research from the last couple of years has also been added.

ABSTRACT
Most persistent memories in which bodies of information are stored can only provide a view of that information as it currently is, from a single point of view, and with no respect to its reliability. This is a poor reflection of reality, because information changes over time, may have many and possibly disagreeing origins, and is far from often certain. Hereat, this paper introduces a modeling technique that manages conflicting, unreliable, and varying information. In order to do so, the concept of a “single version of the truth” must be abandoned and replaced by an equivocal theory that respects the genuine nature of information. Through such, information can be seen from different and concurrent perspectives, where each statement has been given a reliability ranging from being certain of its truth to being certain of its opposite, and when that reliability or the information itself varies over time, changes are managed non-destructively, making it possible to retrieve everything as it was at any given point in time. As a result, other techniques are, among them third normal form, anchor modeling, and data vault, contained as special cases of the henceforth entitled transitional modeling.

We hope you all will have fun with transitional modeling, as our research continues, particularly with respect to how it should fit into a database, relational or not.

On the hashing of keys

In Anchor we follow the established paradigm that an instance in the domain we are modeling should only be represented once in the database. For this reason, the surrogate keys we use as identities of such instances need to be dumb in the sense that they can neither by themselves convey any meaning nor be an encoding of anything that has meaning. We prefer sequences, as they are small with respect to size, cheap with respect to the generation of identities, monotonically increasing, and reasonably hard to confuse with something that carries meaning.

As a discouraging example, let’s assume we would like to hash the natural key of a citizen of Sweden using MD5 and use these hashes as identities in our database. First, every citizen is identified by a personal number, of the form:

YYMMDD±NNNC

The date of birth is followed by a delimiter and four digits, where the first three constitute a serial number and the last is a check digit. The serial number is even for men and odd for women. The delimiter is a minus sign if you are younger than 100 years and a plus sign once you get older than that. In other words, not an entirely stable key over time. To complicate things even further, foreigners visiting the country may be given a coordination number, which looks exactly like a personal number, except with DD+60. In any situation in which you need to provide your personal number but cannot do so, you may also be given a reserve number. The way to create a reserve number is that it should retain a correct birth date but contain at least one letter instead of a digit in the NNNC part.

As many drawbacks as this system may have, it is a fact of life that every data warehouse in Sweden has to cope with. For example, the MD5 of someone staying in the country for a longer period of time with coordination number 890162-3286 is ee0425783590ecb46030e24d806a5cd6. This can be stored as 128 bits, whereas an integer sequence using 32 bits will suffice for the population of Sweden with a healthy margin. Let’s also assume that we have a tie representing the relationship between married people. As there is a risk of divorce, such a tie must be knotted and historized. If we are purists, the key in the knot should also be a hash, even if it could be represented using a single bit with 1 for married and 0 for divorced. The keys in the hashed tie will consume 128 + 128 + 128 = 384 bits, whereas the sequenced tie will consume 32 + 32 + 1 = 65 bits. Caeteris paribus, the hashed tie is almost six times larger. The issue is further accentuated if you look at a plain text representation of the data. A sequence from 1 to 10 million will use less than 70 million characters in plain text, whereas the 10 million hashes will use no less than 1.2 billion characters. In a situation where the license cost is based on a textual representation of raw data, the hashes would be almost 20 times as expensive.
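
As a quick sanity check of the sizes involved, the following Transact-SQL sketch compares the MD5 hash of the coordination number above with an integer sequence value; it only assumes that the number arrives as a plain string.

-- a minimal sketch: MD5 via HASHBYTES occupies 16 bytes (128 bits),
-- whereas an integer sequence key occupies 4 bytes (32 bits)
declare @coordinationNumber varchar(11) = '890162-3286';
select
    hashbytes('MD5', @coordinationNumber)             as md5_key,
    datalength(hashbytes('MD5', @coordinationNumber)) as md5_bytes,      -- 16
    datalength(cast(1000000 as int))                  as sequence_bytes; -- 4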

To continue the example, after some period of time, 890162-3286 obtains citizenship and becomes 890102-3286. The MD5 changes as well and is now 049fda914afa455fae66115acd78b616, completely different from before. Before obtaining citizenship, this person got married, so there are now rows in the tie with the wrong hash. In the sequenced tie we would expect to find the personal number as a historized attribute of the adjoined anchor. No problem here, two personal numbers now refer to the same sequence number, but at different time periods. The resolution with the hashed keys would be to introduce another instance in the anchor with the new hash together with an additional tie indicating that these two instances actually refer to the same thing in real life. Any query must now take this into account, slowing all queries down, and data must be duplicated, resulting in increased maintenance costs and degraded performance. Of course, if you are not interested in keeping a history of changes you can just update the existing rows, but this is a costly operation and may be quite troublesome if foreign keys are declared. We also presumed that such a history is a ‘must have’ requirement, as in any proper data warehouse.
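
With a sequenced key, the change of personal number is just another row in the historized attribute. Below is a minimal sketch of that alternative; the table and column names are made up for the example and are not the DDL generated by the online modeler.

-- a minimal sketch: the personal number as a historized attribute of a sequenced anchor
create table PE_Person (
    PE_ID int identity(1, 1) primary key         -- dumb surrogate key
);
create table PE_PNR_Person_PersonalNumber (
    PE_ID                 int      not null references PE_Person (PE_ID),
    PE_PNR_PersonalNumber char(11) not null,      -- '890162-3286', later '890102-3286'
    PE_PNR_ChangedAt      date     not null,      -- both numbers refer to the same PE_ID
    primary key (PE_ID, PE_PNR_ChangedAt)
);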

The conclusion is that hashing is a poor choice when keys may change over time, due to the fact that the hash, while looking like a meaningless string, actually still carries the remnants of the meaning of the input. This is sufficient for causing trouble! We also saw that the size of “safe” hashes is significantly larger than that of integer sequences. By “safe” we assume that the hash is complex enough for clashes to be extremely rare. However minuscule the risk of a collision may be, it could still be a completely unacceptable event, should it occur. See Black Swan Theory for more information. The “safeness” of the hash is also proportional to the cost of generating it, so the safer you want to be, the more CPU cycles are used in order to try to reassure you.

Polymorphic Graph Queries in SQL Server

Sometimes I get the question of when Anchor Modeling is not suitable. “Actually, most of the time it is”, is my common answer. However, there are times when the requirements are such that you need a bit of trickery on top of a model. One such case recently emerged at a client. The dilemma was how to ask polymorphic graph queries in SQL Server when you have a network represented as a parent-child relationship in your Anchor model. First, a polymorphic graph query is one in which you want to find nodes with certain properties connected through any number of edges in your network. For example, “find all computers that at any point have a wireless connection between them”. You may think that the new graph table types in SQL Server 2017 would solve this, but alas, they do not support these types of queries (yet).

Fortunately, SQL Server 2008 introduced an often overlooked data type, available ever since: HIERARCHYID. At first glance it looks disappointing, but it turns out that by using string searches and manipulation, polymorphic queries can be asked. Below is an example that shows how this is done, and it should of course be applicable to any type of network, not just ones containing computers, switches, routers, cables and wireless connections. As a small bonus, a hint is also given of how to solve the traveling salesman problem.
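
As a rough illustration of the idea, and not the exact code from the example, assume that the network has been unrolled into HIERARCHYID paths with one row per visited node, where the node type is stored next to the path. The names below are hypothetical, and the built-in IsDescendantOf is used here in place of explicit string searches on ToString().

-- a minimal sketch: pairs of computers with at least one wireless hop between them
create table NetworkNode (
    NodePath hierarchyid primary key,
    NodeType varchar(20) not null               -- 'computer', 'switch', 'wireless', ...
);
select c1.NodePath.ToString() as fromComputer,
       c2.NodePath.ToString() as toComputer
from NetworkNode c1
join NetworkNode c2
  on c2.NodePath.IsDescendantOf(c1.NodePath) = 1
 and c2.NodePath <> c1.NodePath
where c1.NodeType = 'computer'
  and c2.NodeType = 'computer'
  and exists (
      select 1
      from NetworkNode w
      where w.NodeType = 'wireless'
        and w.NodePath.IsDescendantOf(c1.NodePath) = 1
        and c2.NodePath.IsDescendantOf(w.NodePath) = 1
  );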

If you want a general introduction to hierarchical data in databases I can recommend the guide “Hierarchical Data in SQL” by Ben Brumm.

Poor Man’s Auditability

One common question in Anchor Modeling is what to do with values that suddenly disappear. Our answer has been that values do not disappear, but may become unknown, and if such occurrences are of interest then this fact should be modeled explicitly. The exception is when corrections are made, and if you need to keep track of them the solution is to use concurrent-reliance-temporal modeling. However, there may be scenarios in which you only want to keep some simple auditability of corrections (or some bastardization of disappearing values) while still remaining in unitemporal modeling. This is now possible in the test version of the online Anchor Modeler, thanks to the addition of deletability.

In order not to break compatibility with existing code, new columns have been added in the latest views for deletable attributes. It works as follows with the update trigger on the latest view for an attribute marked as deletable:

update lST_Stage
set
   ST_NAM_Stage_Name = null,
   ST_NAM_ChangedAt = '2018-05-04',
   Deletable_ST_NAM = 1
where
   ST_ID = 42;

Values can be deleted only when Deletable_ST_NAM is set to 1 in an update. Without that line, the update works exactly as before, which is to ignore anything being set to null. The result of the code above is that a new table is created, named ST_NAM_Stage_Name_Deleted, to which all matching rows are moved. Note that this clears out all history. The new table will have one additional column containing the date of the deletion, which is the current date if the attribute is static or if no ST_NAM_ChangedAt is present in the update, or whatever ST_NAM_ChangedAt is set to if it is present. This should give you a “poor man’s auditability” for disappearing values in unitemporal modeling.
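
For auditing purposes, the moved rows can then be queried like any other table. A minimal sketch, assuming the deletion table keeps the original columns plus a deletion date column (the exact generated column names may differ):

-- a minimal sketch: inspect what was deleted for a given instance
select ST_ID, ST_NAM_Stage_Name, ST_NAM_ChangedAt, ST_NAM_Deleted
from ST_NAM_Stage_Name_Deleted
where ST_ID = 42;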

Optimizing BETWEEN joins

Have you ever found yourself needing to do a join on a BETWEEN condition? If so, you have probably also been struggling with performance. The optimizer in SQL Server just cannot seem to produce a good execution plan for BETWEEN. Thanks to a lot of trial and error, we were however able to find a workaround!

This workaround, using a CROSS APPLY together with a SELECT TOP 1 … ORDER BY, should prove useful outside of the Anchor Modeling community as well. The example is a draw consisting of a bunch of floating point numbers, similar to Fisher’s noncentral hypergeometric distribution, but this works similarly for date ranges.
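
In outline, the workaround replaces the BETWEEN join with a correlated lookup of the closest lower boundary. A minimal sketch with hypothetical table and column names, assuming the ranges are non-overlapping:

-- a minimal sketch, instead of:
--   join dbo.Ranges r on d.Value between r.RangeStart and r.RangeEnd
select d.Value, r.RangeId
from dbo.Draws d
cross apply (
    select top 1 ra.RangeId, ra.RangeEnd
    from dbo.Ranges ra
    where ra.RangeStart <= d.Value     -- seekable with an index on RangeStart
    order by ra.RangeStart desc        -- the closest lower bound wins
) r
where d.Value <= r.RangeEnd;           -- keep only values actually inside the range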

Performance improvements

The code generated by the test version of the online modeler has had some modifications made in order to achieve better performing insert triggers and perspectives. Currently, these modifications have only been made for the unitemporal models and only affect the historized parts of the model. Thanks to “Live Query Statistics” in SQL Server, a bottleneck was identified in the insert trigger: the scalar-valued functions finding previous and following values. This code has instead been inlined. When writing new BIML generation for the sisula ETL framework, an apparently faster way to find the latest version was found, using ROW_NUMBER and a “superfluous” GROUP BY to retain table elimination. This has only been tested on SQL Server 2016 so far, but should work from version 2008 onwards. As a side note, the SSIS packages generated from the BIML are orders of magnitude faster than using the insert triggers, so we recommend using these for large volume data warehouses or when priming with historical data. Please test and report back, and if no issues are found this will go into the production version as a new release.
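
As an illustration of the ROW_NUMBER pattern (with hypothetical table and column names, not the exact generated code), finding the latest version of a historized attribute can look like this:

-- a minimal sketch: latest version per instance, with a "superfluous" GROUP BY over all
-- selected columns (a no-op on unique rows) kept to retain table elimination
select ST_ID, ST_NAM_Stage_Name, ST_NAM_ChangedAt
from (
    select
        ST_ID,
        ST_NAM_Stage_Name,
        ST_NAM_ChangedAt,
        row_number() over (
            partition by ST_ID
            order by ST_NAM_ChangedAt desc) as ReversedVersion
    from ST_NAM_Stage_Name
    group by ST_ID, ST_NAM_Stage_Name, ST_NAM_ChangedAt
) latest
where ReversedVersion = 1;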