Sometimes it just amazes me what people think is computable given their actual observation space. At times you have to look them in the eye and tell them they are living in fantasyland.
Here is an example conversation:
Me: “Tell me about your company.”
Customer: “We are in the business of moving things through supply chains.”
Me: “What do you want to achieve with analytics?”
Customer: “We want to find bombs in the supply chain.”
Me: “Tell me about your available observation space.”
Customer: “We have information on the shipper and receiver.”
“We also know the owner of the plane, train, truck, car, etc.”
“And the people who operate these vehicles too.”
Me: “Nice. What else do you have?”
Customer: “We have the manifest – a statement about the contents.”
Me: “Excellent. What else you got?”
Customer: “That’s it.”
"YOU ARE NEVER GONNA FIND A BOMB!”
“NO ONE WRITES ‘BOMB’ ON THE MANIFEST!”
The problem: often the business objectives (e.g., finding a bomb) are simply not possible given the proposed observation space (data sources).
[Unless, in this case, the perpetrator writes the word “BOMB” on the manifest. And only idiots do that. And luckily we don’t have to worry much about the idiots as they run out of gas on the way to the operation and take wet matches to their fuses.]
When we software engineering folks get overly excited and run off and build systems with little forethought about the balance between the mission objectives and the observation space, there is a risk the system will be a useless piece of crud – so many false positives and false negatives that the value of the system is not worth the cost.
As I have no interest in spending intense chunks of my life building pointless systems … when initially scoping a project I recommend first qualifying the available observation space to determine if it is sufficient to deliver on the mission objectives. And if the available observation space is insufficient, then one must first figure out if/how the observation space can be appropriately widened.
In case you are interested, here are some of the ways I try to mitigate these risks:
Qualifying Observation Spaces
- Ask for real examples from the past of things they would like to detect (opportunity or risk), and then look in the real data to see if, upon human inspection, it is discoverable.
- If real examples from the past cannot be detected in the provided data sources, I tell them “not even a sentient being could discover this.”
- Have them name their data sources and the data elements (key features).
- Then, don’t just take their word for it that a data source has certain features – go look yourself. I can’t tell you how many times I take a look and find key columns are empty or so dirty that the value of that data source is negligible.
- If the data sources share common features between them (e.g., customer number, address, phone number, etc.), then generally more overlap is good – shared features are what let you link records across sources.
- For those data sources that share no (or few) useful features (e.g., one data source has name and address and the other data source has stock symbol and stock price), generally this is not so good.
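The “go look yourself” step above can be sketched as a quick profiling pass. This is a minimal sketch, not a real profiler – the file, column, and placeholder values are all hypothetical:

```python
# Profile claimed data sources: how usable are the key columns, and which
# features are shared across sources (the hooks for linking records)?
import csv
from collections import defaultdict

def profile_source(path, key_columns):
    """Fraction of usable (non-empty, non-placeholder) values per key column."""
    usable = defaultdict(int)
    total = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            for col in key_columns:
                value = (row.get(col) or "").strip()
                # Treat common junk values as empty, too.
                if value and value.lower() not in {"n/a", "null", "unknown"}:
                    usable[col] += 1
    return {col: (usable[col] / total if total else 0.0) for col in key_columns}

def shared_features(source_columns):
    """Features appearing in more than one source, and which sources have them."""
    appears_in = defaultdict(set)
    for source, columns in source_columns.items():
        for col in columns:
            appears_in[col].add(source)
    return {col: sources for col, sources in appears_in.items() if len(sources) > 1}
```

If `profile_source("crm.csv", ["customer_number", "phone"])` comes back mostly zeros, or `shared_features` comes back empty across the proposed sources, that is your early warning that the observation space is weaker than claimed.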
Widening Observation Spaces
- There will be many cases where it becomes necessary to help the customer think about widening their observation space … if they are ever to realize their hopes and dreams (business objectives).
- Conjuring up additional data to expand the observation space is quite an art and requires a real-world understanding of what data flows inside the walls and outside the walls – and how – as well as the legal and policy ramifications.
- Generally one starts looking for new data sources in this order: 1) other stuff inside the walls that is already being collected (e.g., product returns), 2) external data that can be purchased (e.g., marketing flags like “presence of children” and “income indicators” as routinely sold by data aggregators). Of course there are other options, like collecting more data directly (e.g., adding a field to a web page so customers can express sentiment, capturing the fingerprint of the device during on-line transactions, etc.)
- If you are trying to catch bad guys, hope that some of the data sources are unknown or non-intuitive to the adversary (if the bad guys know you have cameras on these four streets, then they will take the fifth street).
- Beware of social media: There is much allure to the idea that one can computationally map Twitter statements (about your company/brand) to which customer said it. Go take a look yourself and see how often the Tweet account contains sufficient features to make an accurate identity assertion. I think you will see the frequency in which an identity can be asserted is underwhelming. Different countries and different kinds of social sites will have different statistics. In any case, be wary and look for yourself first.
- Now let’s say one has a list of potential new data sources. The next question is how to prioritize all these possibilities. Again, there are a lot of ways to think about this – but here are a few common ones: A) Data that improves the ability to count or relate entities (e.g., a source that may contain new identifiers like email addresses), so that one can discover that two customers you thought were different are more likely the same customer; B) Data that brings more facts (e.g., what, where, when, how many, how much); C) Diverse data potentially containing identifiers and facts in disagreement (e.g., this fact indicates they are here, but that fact shows they may really be over there – helpful if trying to keep strangers from using your credit card).
- Finally, don’t forget there will be plenty of times that the mission objectives cannot be achieved because the necessary observation space is not available. In which case, punt.
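Prioritization case (A) above – a new source whose fresh identifier reveals that two “different” customers are likely the same – can be sketched as follows. The records, field names, and email addresses here are invented for illustration:

```python
def link_by_identifier(known_customers, new_source, identifier="email"):
    """Group existing customer ids that a new source's identifier ties together."""
    # What identifier does the new source report for each known customer id?
    ident_for = {row["customer_id"]: row.get(identifier) for row in new_source}
    groups = {}
    for rec in known_customers:
        ident = ident_for.get(rec["customer_id"])
        groups.setdefault(ident, []).append(rec["customer_id"])
    # An identifier shared by two or more customer ids flags a likely duplicate.
    return {ident: ids for ident, ids in groups.items() if ident and len(ids) > 1}
```

For example, if customers “Bob Smith” and “Robert Smith” both show up in a purchased marketing file with the same email address, they surface here as one likely customer – exactly the kind of counting/relating improvement case (A) is after. Real entity resolution is of course far fuzzier than an exact-match dictionary lookup.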
The above list is somewhat off the cuff, certainly incomplete. So … please consider it a starter kit and hack at it any which way you like …