Media Influencer

helping people break out of pigeonholes since 2003

It’s the context, stupid

TAGS: None

Doc Searls asked, in connection with VRM, about the last three paragraphs of this post by Daniel Goleman.

The singular force that can drive this transformation of every manmade thing for the better is neither government fiat nor the standard tactics of environmentalists, but rather radical transparency in the marketplace. If we as buyers can know the actual ecological impacts of the stuff we buy at the point of purchase, and can compare those impacts to competing products, we can make better choices. The means for such radical transparency has already launched. Software innovations now allow any of us to access a vast database about the hidden harms in whatever we are about to buy, and to do this where it matters most, at the point of purchase. As we stand in the aisle of a store, we can know which brand has the fewest chemicals of concern, or the better carbon footprint. In the Beta version of such software, you click your cell phone’s camera on a product’s bar code, and get an instant readout of how this brand compares to competitors on any of hundreds of environmental, health, or social impacts. In a planned software upgrade, that same comparison would go on automatically with whatever you buy on your credit card, and suggestions for better purchases next time you shop would routinely come your way by email.

Such transparency software converts shopping into a vote, letting us target manufacturing processes and product ingredients we want to avoid, and rewarding smarter alternatives. As enough of us apply these decision rules, market share will shift, giving companies powerful, direct data on what shoppers want — and want to avoid — in their products.

Creating a market force that continually leverages ongoing upgrades throughout the supply chain could open the door to immense business opportunities over the next several decades. We need to reinvent industry, starting with the most basic platforms in industrial chemistry and manufacturing design. And that would change everything.
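The point-of-purchase readout described above can be pictured as a simple lookup and ranking. A minimal sketch, assuming a toy impact database — the barcodes, brands and scores below are entirely invented:

```python
# Hypothetical impact database: barcode -> brand, category and impact scores.
# Lower scores are better; all figures are made up for illustration.
IMPACT_DB = {
    "5000000000017": {"brand": "BrandA", "category": "washing liquid",
                      "carbon": 3.2, "chemicals_of_concern": 5},
    "5000000000024": {"brand": "BrandB", "category": "washing liquid",
                      "carbon": 1.8, "chemicals_of_concern": 2},
    "5000000000031": {"brand": "BrandC", "category": "washing liquid",
                      "carbon": 2.5, "chemicals_of_concern": 8},
}

def compare(barcode, metric):
    """Rank the scanned product against competitors in its category."""
    scanned = IMPACT_DB[barcode]
    rivals = [p for p in IMPACT_DB.values()
              if p["category"] == scanned["category"]]
    ranked = sorted(rivals, key=lambda p: p[metric])  # best (lowest) first
    position = ranked.index(scanned) + 1
    return scanned["brand"], position, len(ranked)

brand, rank, total = compare("5000000000017", "carbon")
print(f"{brand} ranks {rank} of {total} on carbon footprint")
```

Even this toy version shows where the hard part lies: the single `metric` number hides all the debate about how such a score was arrived at.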

The article seems to imply that the data is out there in a form or format provided via some centralised source. My immediate reaction was that this is not how the social web, or the Live Web, works: a) data is generated by anyone and everyone, and b) it’s messy and its context emergent.

Technology and tools should serve us better and help us, as individuals, to filter and structure that information. Somehow, even in the best case scenario, I don’t see everything on tap from a unified source. Or digested, which is an uncomfortable implication that leaps out of the piece at me.

For example, assessing the environmental or health impact of anything is subject to years, decades even, of debate, controversy, lobbying, vested interests, political play… and so it seems to me that the only way I can get information clear enough for making decisions is to ‘subscribe’ to a particular view via sources promoting it. Of course, I can get a more balanced take on everything these days by finding alternative views somewhere on the web, but I am not sure I want to stand in the supermarket trying to follow a potentially heated and complicated online debate about the impact of the washing liquid I am about to put in my basket. Can technology speed up and simplify this process to the point where it becomes practical, without losing the context for deliberation in the process? That is one of the questions I ask myself whenever I come across yet another tool to help us search, compare, aggregate or match information online.

That said, information about nutrients and other non-controversial data of interest to me is easy enough to provide and, sadly, this is where most vendors fall short of what’s possible with existing technology. The operative word here is non-controversial, which is the Trojan horse of any implementation of such resources. I mean that even what is meant to be a gathering of ‘encyclopedic’ knowledge can be controversial at times. Trying to do that with live streams of information means that the checks and balances must reside in the context, not the source itself.

At the more fundamental level, the web and information technology made data cheap. It is the context to data that got expensive, in time and social interactions. On the web the best context costs you time spent browsing and researching and/or time spent cultivating a quality network to supply you with context as you need it. Here I elaborate:

The web has removed physical limitations on space. Data was expensive to create, store and move around, and now it is not. This made room for context, which is becoming at least as important as the data. In fact, context is what makes data and information the skeleton, giving shape to the flesh and skin, but no longer the whole body and finish. The important thing is that context can be provided only by a human mind. It cannot be automated, either in creating it or in absorbing it.

Update: The Guardian advert makes a similar point with regard to media and the interpretation of the ‘facts’ one sees.

It comes down to whether you prefer context to be provided by:

  1. automated algorithms à la Google and the thousands of aggregation sites,
  2. trusted sources including vendors, manufacturers, even third parties and intermediaries, or
  3. your network of friends aka social network

The answer is obvious.

It depends! We use all three at different points in our information gathering, sharing, exchange and transactions. The challenge for VRM is to understand the advantages and disadvantages of all three and to encourage development of tools that give me, the individual user or customer, the best of all three.

My bet is on no.3. I want to help individuals to capture both data and context on their own terms. This will give rise to another layer of knowledge that serves both the individual and his network. For example, I want to collect data about my shopping, with my own comments and with sources of information useful to me. I want to have pictures of products I have bought, links to reviews by others and my own, comments by friends in my network, record of interactions with the vendors and third parties etc etc. I want it in a place I can further analyse it and share it based on my privacy requirements.

With time, all this can become a source of better understanding of my own behaviour and preferences, and, with practice, a better negotiating position in future transactions. In other words, I will be the most authoritative source of my own history, with data, information and knowledge about me.
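A minimal sketch of what such a personal record might look like — the field names and the simple visibility flag are my own assumptions, not a proposed standard:

```python
from dataclasses import dataclass, field

@dataclass
class PurchaseRecord:
    """One shopping event, holding both the data and its surrounding context."""
    product: str
    vendor: str
    my_comment: str = ""
    photos: list = field(default_factory=list)        # paths or URLs
    reviews: list = field(default_factory=list)       # links: mine and others'
    friend_comments: list = field(default_factory=list)
    interactions: list = field(default_factory=list)  # exchanges with the vendor
    visibility: str = "private"                       # "private", "friends" or "public"

def share_with_friends(records):
    """Filter the store by my own privacy requirements before sharing."""
    return [r for r in records if r.visibility in ("friends", "public")]

store = [
    PurchaseRecord("washing liquid", "SuperMart", "smells odd",
                   visibility="friends"),
    PurchaseRecord("coffee beans", "LocalRoaster"),  # stays private
]
print([r.product for r in share_with_friends(store)])  # only the shared record
```

The point of keeping the structure this simple is that the individual, not the vendor, decides both what goes into a record and who gets to see it.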

And that might change everything.

Young Girl-Old Woman illusion

Bonus link: TED talk by Chris Jordan, Picturing excess


9 Responses to “It’s the context, stupid”

  1. Dave Walker
    on Jan 18th, 2009
    @ 18:24

    “In other words, I will be the most authoritative source of my own history, with data, information and knowledge about me.

    And that might change everything.”

    It certainly will – especially for folk who choose not to tell the truth in such matters.

    Some assertions may need to be verifiable. Before you ask, I haven’t yet figured out how such verification mechanisms might work…

  2. Adriana
    on Jan 18th, 2009
    @ 19:04

    Dave, what kind of verification do you need for reading this blog and drawing your own conclusions or acting on it?!

  3. Craig Overend
    on Jan 18th, 2009
    @ 19:17

    This ties in with something I call ARM, or Automated Relationship Management, whereby sources are automatically aggregated for me based on my context and existing relationships, and managed by my interaction with those sources over time in those contexts. Sources I then consider fruitful stay with me in a form of relationship I call an ARM Rank, in that they are automatically recalled in contexts, and visualised dynamically, based on my ARM Rank, in ways I have found them useful.

    In doing so, in a sense I’m creating a valuation or reputation for a source in a particular context. With the ARM Rank valuation being how many additional ‘arms’ it adds, or how well it augments me.
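    The ARM Rank idea could be sketched as a per-context score nudged by each interaction. The exponential-moving-average update below is one possible reading of the comment, not anything it specifies:

```python
class ARMRank:
    """Per-context reputation for a source, updated by interaction feedback."""

    def __init__(self, learning_rate=0.3):
        self.learning_rate = learning_rate
        self.rank = {}  # (source, context) -> score in [0, 1]

    def interact(self, source, context, useful):
        """Nudge the source's score toward 1.0 if the interaction was useful."""
        key = (source, context)
        old = self.rank.get(key, 0.5)  # unknown sources start neutral
        target = 1.0 if useful else 0.0
        self.rank[key] = old + self.learning_rate * (target - old)

    def recall(self, context, top_n=3):
        """Sources worth automatically recalling in this context, best first."""
        scored = [(s, v) for (s, c), v in self.rank.items() if c == context]
        return [s for s, _ in sorted(scored, key=lambda x: -x[1])][:top_n]

arm = ARMRank()
for _ in range(3):
    arm.interact("goodguide.com", "shopping", useful=True)   # fruitful source
arm.interact("randomblog", "shopping", useful=False)         # not so much
print(arm.recall("shopping"))  # goodguide.com now outranks randomblog
```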

    I start delving into the world of narrow artificial intelligence here, as well as probabilistic logic, when it comes to finding the simplest solution to a problem (like your supermarket example). What might be the right answer for you probably won’t be for me. Part of the difficulty is in the algorithm knowing whether you like or dislike something, and the interface to feed that knowledge to it. It’s only in having this, and one’s historical data, that I believe we can even begin to think in these directions, so I’m trying not to until I have that foundation. Although it can be fun to do so!

    ps. Augmented Relationship Management is an alternative name I’ve toyed with and this post reminded me of GoodGuide.

  4. Adriana
    on Jan 18th, 2009
    @ 19:27

    Craig, sadly, I’d consider Automated Relationship Management an oxymoron.

    As I point out in my post, context can meaningfully be processed only by a human mind. Hence the graphic at the end of the post.

  5. The Mine! project » It’s the context, stupid
    on Jan 18th, 2009
    @ 19:44

    [...] bit of context for Mine!: It comes down to whether you prefer context to be provided [...]

  6. Al Tepper
    on Jan 18th, 2009
    @ 21:42

    Think you’re right Adriana; #3, which could include #1 and #2 of course, is the likely route.

    I wonder how VRM might apply to politics as well as commerce: so instead of politicians and parties pulling support, voters would engage using tech to identify their advocates…

    Is the Jan meet an open meet or does one need an invite? :) Would like to attend, lemme know…


  7. Adriana
    on Jan 18th, 2009
    @ 22:08

    Al, the VRM Hub meetings have been going on since last January and the whole point was that they are open to anyone interested in VRM.

    Details here:

  8. Craig Overend
    on Jan 19th, 2009
    @ 3:40 am

    “context can meaningfully be processed only by a human mind”

    It’s a difficult problem, granted. However, this relates to my point about the interface. If a user teaches an algorithm enough about the context over time, you should be able to use the learned models to probabilistically choose what information to include in rotation with existing information. In the image’s case you may give two contexts because the probabilities are so close. Scene-recognition software gets better every day, the more training data it has and can handle. Numenta is an excellent example of this.
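    The ‘probabilistically choose what information to include in rotation’ step could be sketched as weighted sampling over learned preferences — the item names, weights and floor value here are invented purely for illustration:

```python
import random

def rotate_information(items, preferences, k=2, seed=None):
    """Probabilistically pick items to show, weighted by learned preference.

    `preferences` maps item -> learned weight; unseen items get a small
    floor weight so new information still surfaces occasionally.
    """
    rng = random.Random(seed)
    weights = [max(preferences.get(item, 0.0), 0.05) for item in items]
    return rng.choices(items, weights=weights, k=k)

prefs = {"eco reviews": 0.8, "price history": 0.5}  # learned over time
picks = rotate_information(
    ["eco reviews", "price history", "celebrity ads"], prefs, seed=42)
```

Over many rotations, strongly preferred items dominate, while the floor weight keeps the selection from collapsing into a pure echo of past choices.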

    Humans infer the context of other humans every day, based upon past experience and the emotional cues we see in others. We may not get it right every time, but it’s a valuable social tool that aids in relationships and learning. It’s only through time, inference and ranking that we learn who the experts are and who to avoid (and you may be wanting to avoid my thinking right now: it gets worse :) ). I see no reason we can’t infer digitally, based upon enough of these cues, at first in narrow contexts, to suggest information. Sites already do this poorly today with recommendation engines limited to a very narrow subset of knowledge about us.

    Short-term with the interfaces and algorithms we have now I don’t believe a great deal can be inferred to benefit us, but long-term I do see much of this happening.

    Eventually I envisage sensor devices like eyewear that sees what you see and hears what you hear, with see-through optical projection to display additional information for any given context. Some people call this mixed reality, and there are already rudimentary examples of it. All this has to be managed by the users themselves, however, as to what information is really relevant in context, with the algorithm improving as they interact: the system learning, suggesting and personalising in a never-ending cycle.

    Whether it’s a computer with webcam, GPS, etc. or headwear that can monitor the blood flow of your brain and watch your context, it’s just a matter of how much can be inferred and how well that inference knowledge is then managed.

    These days some new cars even have proximity echo sensors to tell exactly where you are in the car, and the first MIT quantum radar experiment has been performed, with the potential to practically image through anything (your brain?). It’s all data that can be used to build an individual’s context in order to infer what might happen next and suggest possible outcomes.

    The problem is getting hold of all this valuable context information and then making use of it effectively. Sensors need to be meshed, data needs to be aggregated and persisted, and if this is to be context-based, then real-time processing needs to occur with resources traveling with the user. Second Life, for example, does this when you move from one island to another, moving avatars from server to server.

    A lot needs to be done before any of this has any noticeable impact, but I do believe it will happen, one day, progressively.

  9. David Spira
    on Jan 19th, 2009
    @ 5:08 am

    Craig, I love technology, but I can’t for the life of me see why I would want an algorithm to provide context for me.

    Sure, you can make an argument that it is efficient and that you can take in more information, but it seems to me that that would dilute thought, analysis, and what it means to be a human.

    Having a software package analyse and provide context would probably push people further into pigeonholes of thought. The software analyses what you like/believe and in turn echoes that back to you endlessly, until all you see, hear, and read are things that reinforce your own unbending view of the world. Sure, the AI could grow and evolve, but only if the person continues to do so.

Leave a Reply

© 2009 Media Influencer. All Rights Reserved.

This blog is powered by Wordpress and Magatheme by Bryan Helmig.