Category: Collective Web

  • Thematic tracks

    Although I have my problems with the name Social software, there are some very interesting ideas floating around these days.

    Ross Mayfield:

    Social Software adapts to its environment, instead of requiring its environment to adapt to software.

    As Shelley Powers points out:

    And we even have SOAP and instant messaging and wireless and other techie tools to make it gadgety enough.

    That reminds me of Howard Rheingold and his recent work on new forms of social networking, smart mobs:

    Smart mobs emerge when communication and computing technologies amplify human talents for cooperation. The impacts of smart mob technology already appear to be both beneficial and destructive, used by some of its earliest adopters to support democracy and by others to coordinate terrorist attacks.

    Ross Mayfield again:

    People are smart about how they get their work done. If a software-driven business process fails to serve their activities, they will adapt using their informal network resources to get it done. In other words, when business process fails, business practice takes its place.

    (with reference to John Seely Brown and Paul Duguid’s The Social Life of Information).

    All of the above to introduce what I’ve spent the last day of my holiday on, my thematic trackback ping lists:

    What are these for? Basically, they are pages that show the result of XML files flowing around between my server and whoever sends their XML files my way via trackback pings. So, ping me and your news will be added. Since TrackBack is an XML-based framework for peer-to-peer communication and notifications between web sites, and since my trackback lists are thematic, the basic idea is to create a number of thematic news services, thematic tracks. Let’s see how it goes. Start pinging!
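
    For the curious, a ping is nothing exotic: the TrackBack specification defines it as an HTTP POST of a few form-encoded fields to the target’s ping URL, answered with a tiny XML document. A minimal sketch in Python (the ping URL and the post details below are made up):

    import urllib.parse
    import urllib.request

    # Hypothetical ping URL of the thematic track to notify.
    PING_URL = "http://example.com/cgi-bin/mt-tb.cgi/42"

    # The fields defined by the TrackBack specification.
    fields = urllib.parse.urlencode({
        "title": "My post about thematic tracks",
        "url": "http://example.org/archives/000123.html",
        "excerpt": "A short summary of the post being announced.",
        "blog_name": "Example Weblog",
    }).encode("utf-8")

    # urllib sends this as application/x-www-form-urlencoded by default.
    with urllib.request.urlopen(urllib.request.Request(PING_URL, data=fields)) as reply:
        # The response is XML; <error>0</error> means the ping was accepted.
        print(reply.read().decode("utf-8"))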

    Coders: Get the standalone trackback source code, and read the documentation. Also follow the TrackBack Development Blog. See also Ben’s related comments here and here.

  • LazyWeb Challenge

    Much can, and will, be said about the lazy web. One thing that I have learned is that the people talking about lazyweb are probably the least “lazy people” by any normal standards. It took Ben Hammersley only 4 hours to track down my comment about his offer (and I didn’t even trackback ping him), and now he commands me (YES! You must! Now! Now! Now!) to come up with something. Guess I’d better do so, then (does it work, the trackback?).

    Hmmm. What is a ‘lagom’ lazyweb idea, Ben? Lagom? I know you live in Sweden, and I guess you have learned that word by now? No? Lagom defines the space between too much and too little, and is a very Swedish word. Almost like the Danish ‘hygge’ (1, 2, 3), though it means something else. It was one of the words that took me a while to understand, but once you live in Sweden, you get it quickly, I guess (I lived in Sweden 1997-2001).

    The challenge I am thinking of is perhaps too ambitious, but I have an idea that it is possible to make something really interesting with RSS, trackback pinging, web services and stuff like controlled vocabularies (“A controlled vocabulary is a way to insert an interpretive layer of semantics between the term entered by the user and the underlying database to better represent the original intention of the terms of the user.”). Also, it would be interesting to add some more plain/manual taxonomy and semantics, but this may be too scary for lazy people, so let’s not go too much into that. What I have in mind here is something more or less automated. I am not sure how this would work, or what will be needed to make it happen, but I can see lots of possibilities here (well, maybe not right there, but in the idea).

    Did I explain the idea in a way that makes any kind of sense? Probably not. Perhaps a quick scenario would help? It would go something like this:

    Mid-morning, Lazy Peter gets a mail from his boss saying “working home, check this, write memo now, skip lunch if need be” containing a link. Peter visits this link. He finds it remotely interesting, but is lazy and decides to add it to his lazyblog and then go for lunch. Adding the link is done in two clicks – one on the bookmarklet “Lazy, Later!”, and one to confirm the addition. The bookmarklet fetches basic link information automatically, and he almost never needs to edit anything in the form, so he is a bit annoyed by the confirmation click, but since the confirmation itself serves a purpose, he accepts this “waste” of time. After getting some coffee and a cigarette, Peter decides to take lunch before getting back to his boss. After lunch, Peter goes to his lazyblog. He clicks the “Less Lazy” bookmarklet, which opens a window with his blog entry in a special context (LazyEdit mode). Unlike many others, who don’t like to use LazyEdit as the standard view setting, Peter likes the one-click availability of all the lazy services (he even runs it in Full Mode). His favorite on a really lazy day like this is the “Respond to boss” service. This presents findings from the “yzaliser”, the reverse of laziness, his energetic trackaround bot, which has analysed the link he submitted earlier and now continuously tracks relevant information, news, other links and, of course, blogs that talk about the same issue, and offers automatically generated stuff like a response to his boss, illustrated below with imaginary code:

    Dear boss,
    if LinkAuthorFOAF==BossName
    [Kiss-ass factor: Relatively high]
    Thanks for pointing to the link Title (URL), which as you know is an interesting article about getCoreAttribute, that we should all read. It is a good point that getKeyword[0] is the central theme in getCoreAttribute, but that it is also important to beware of how getKeyword[1] plays a role, a fact the article explains well. The author has clearly read and understood the basic, important works on this issue, namely:
    get5GoogleSiblings.Keyword[1]
    As I read Title, it occurred to me that getMostPopularAttribute.allSiblings.Title is something we should monitor more closely. I skipped lunch to create a new subsite on our public website with relevant resources, see the result here: open.window(get25GoogleSiblings.Keyword[1+2+3]).CreatePage.
    else
    Boring. get2GoogleSiblings.Keyword[1] are much better.
    endif
    I closely monitor news about getCoreAttribute, and also want to draw your attention to this important news story:
    get1GoogleNews.Keyword[1].
    ifDaypopRanking.getRelatedLinks.FollowAll
    print “More:”+5RelatedLinks
    CheckRDFforTB2LazyWeb
    ifYes.MarkToSendTBPing
    forAll.createRSS4LazyVocabulary

    Yours,
    Peter
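
    Just to show the shape of the thing: here is the if/else skeleton of that imaginary letter in runnable Python. Everything in it is invented, and the hard part, the yzaliser’s analysis, is faked with a hand-written dictionary:

    # A toy version of the "Respond to boss" service. The link metadata
    # below would come from the (non-existent) yzaliser bot.
    link = {
        "title": "Example Article",
        "url": "http://example.com/article",
        "author": "Alice",
        "keywords": ["apples", "orchards"],
        "core_attribute": "fruit growing",
    }
    BOSS_NAME = "Alice"

    if link["author"] == BOSS_NAME:
        # Kiss-ass factor: relatively high.
        body = (
            f"Thanks for pointing to {link['title']} ({link['url']}), which as "
            f"you know is an interesting article about {link['core_attribute']}, "
            f"that we should all read. It is a good point that "
            f"{link['keywords'][0]} is the central theme."
        )
    else:
        body = f"Boring. There are better pieces on {link['keywords'][0]}."

    print(f"Dear boss,\n\n{body}\n\nYours,\nPeter")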

    Get it? Now it’s probably even more unclear what I am thinking of, because I got a bit carried away and mixed several things together. At least these:

    * What I called getCoreAttribute, by which I specifically mean some vocabulary magic, is basically the idea of a kind of More Like This from Others combined with category sniffing and a bit of pattern thinking and “law and order” (as in controlled, standardised). Huh? You bet. In plain English: if I submit a link about apples, the system should find a way to know that I am talking about fruits. It will be told that apples are fruits, either by its own vocabulary or by “learning” from others it “meets trackingaround”. I have no idea how this would work, so there’s the challenge! (A toy sketch of the lookup half follows this list.)
    * The bookmarklets (at least the first; the second is imaginary, but easy to make) are of course based on my recent coding adventures. The code I produced actually does the work as described here, more or less. More so when more people start using metatagging.
    * The Google stuff could be composed using web services. The Google API will do some of the work, but it still doesn’t do the news service, does it?
    * Maybe, just maybe, WholeSystem would be useful in some kind of implementation of this (in MT) … OK, maybe not, but it looks pretty cool anyway.
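
    As promised above, here is that toy sketch of the vocabulary lookup. The table, the naive singularisation and the terms are all invented; the lookup itself is trivial. The actual challenge is filling, or learning, the table automatically:

    # A controlled vocabulary as a plain dictionary: free-form terms on
    # the left, the standardised category on the right.
    VOCABULARY = {
        "apple": "fruit",
        "pear": "fruit",
        "carrot": "vegetable",
    }

    def categorise(term: str) -> str:
        # Naive singularisation; fall back to the raw term when the
        # vocabulary has no opinion.
        key = term.lower().rstrip("s")
        return VOCABULARY.get(key, term)

    print(categorise("Apples"))   # -> fruit
    print(categorise("weblogs"))  # -> weblogs (unknown, passed through)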

    Want more challenges? I have already mentioned that I sure could use some help with my bookmarklet, if anyone out there is interested. On the more academic side of it, I must start looking at intellectual property rights, and would love some help here too. If I had written the code from scratch, I’d use it to learn more about GPL and CC and whatnot. But whose code is it? I’ve taken bits and pieces from a number of sources and put them together in a new context. I have mentioned that I got the idea via Jon Udell, and I will credit him at all times. I also borrowed snippets from others. How do I licence the bookmarklet, and do so in a way that is lawyer-readable, human-readable and computer-readable?

    I’m also still struggling with more like this from others, and could use some assistance there; I would appreciate it if any kind soul out there would give me a hand.

  • Million dollar markup?

    Lazywebbing, continued. I enjoy Jon Udell’s adventures with scripting an interactive service intermediary via his ever more creative bookmarklets. It is at times like this that I wish I were a better coder, because Jon’s ideas make a lot of sense to me, and I want to do some practical coding. As mentioned earlier, I’m tempted to try the lazy way. But today has been strange, because as I sat down and started looking at code, it all made a lot of sense, and I am proud to say that I have produced usable, and in fact very useful, code. This is going to be a bit technical, so click MORE to “Dive into Bookmarklets”.
    (more…)

  • Lazy is good

    LazyWeb is emerging as a strong candidate for what will come in the new year. Yesterday, Shelley Powers made a beautiful statement about what it’s all about:
    I think we’re seeing a new form of open source development, based on technology developed for the community and its immediate, expressed needs. A case of community searching for technology rather than technology on the hunt for a user.

    According to LazyWebwiki, the term was coined by Matt “blackbeltjones”, who said “if you wait long enough, someone will write/build/design what you were thinking about”. As Ben Hammersley learned (and taught?), the lazyweb is a rather perfect match with current developments in the blogging community, where we see so much creativity and peer-collaboration.

    It’s tempting to start trackback pinging Ben’s “giftshop” with various ideas. I’m not a coder, but have code ideas, so watch out, Ben 😉

    To be continued …

  • Code vs. data

    Mark Pilgrim‘s play with the cite tag was a good demonstration of the power of semantic markup, or Million dollar markup, as he calls it in a follow-up.

    He refers back to Jon Udell’s post about Google’s co-founder, Sergey Brin, who talked about RDF and the Semantic Web at a conference, saying: “Look, putting angle brackets around things is not a technology, by itself. I’d rather make progress by having computers understand what humans write, than by forcing humans to write in ways computers can understand.”

    Google is using a code-centric approach, Mark argues: “Google doesn’t really have a choice. It’s their code, but it’s not their markup. So of course they’re going to invest money in code. It’s far more cost-effective to throw money at your own code than to try to get millions of independent developers to change their ways just to make your life a little easier.”

    Mark’s own cite tag experiment is data-centric.
    “This is the point: if you have million-dollar markup, you don’t need million-dollar code, and vice versa. But they’re not mutually exclusive, either; it’s a spectrum, and where you fall depends on what you need. Neither Sam’s code-centric approach nor my data-centric approach is inherently better. They both accomplish the same short-term result. Which approach is better in the long run depends on whether you are more likely to re-use the content or the code that parses the content.”

    Good point, Mark.
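
    A footnote to illustrate the data-centric end of that spectrum: once the semantics are in the markup, the harvesting code can stay tiny. This is a sketch of the general technique, not Mark’s actual script:

    # Cheap code over million-dollar markup: collect every <cite> in a page.
    from html.parser import HTMLParser

    class CiteCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.in_cite = False
            self.cites = []

        def handle_starttag(self, tag, attrs):
            if tag == "cite":
                self.in_cite = True
                self.cites.append("")

        def handle_endtag(self, tag):
            if tag == "cite":
                self.in_cite = False

        def handle_data(self, data):
            if self.in_cite:
                self.cites[-1] += data

    collector = CiteCollector()
    collector.feed("<p>As <cite>The Social Life of Information</cite> argues...</p>")
    print(collector.cites)  # ['The Social Life of Information']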

  • Get creative

    Get Creative. Skip the intermediaries. So says Creative Commons, which aims at making a digital rights language that is lawyer-readable, human-readable and computer-readable. No small challenge – two of the three are challenge enough in my experience.

    Back in 1997, when I started working for the Swedish government, one of my first projects was to help create a new legal information system, for lawyers, humans (citizens) and machines. When we published a green paper about the system in 1998, we laid the foundation for what has become Lagrummet. The project was never able to please lawyers, humans and machines alike, and still isn’t. Critics, of which we had many, would say it pleases none of them. Of course, the original idea – basing it all on XML – is still just a dream, and far from practice. We were on to it early, at a time when neither lawyers, humans nor machines knew what XML was. Some of these groups have caught up since then, of course.

    Ah, word of the day: cobloggaration.

  • Really Simple Discoverability

    XMLifying my site, take 5: Really Simple Discoverability (RSD) is “a way to help client software find the services needed to read, edit, or ‘work with’ weblogging software… The goal is simple. To reduce the information required to UserName, Password, and homepage URL.”

    The RFC was made by Daniel Berlinger less than two months ago, and already enjoys support from a number of blogging tools (first Archipelago, for which it was designed, then Radio, and then Ben Hammersley brought it to Movable Type).

    As I understand from reading various blogs, RSD does for blogs what WSIL does for web services in general. WSIL, or WS-Inspection, is one of the (several) parts of the web services technology stack. Timothy Appnel has an excellent introduction to WSIL. He writes:

    “While similar in scope to the Universal Description Discovery and Integration (UDDI) specification, WSIL is a complementary, rather than a competitive, model to service discovery…

    WSIL documents are located using a few simple conventions using existing Web infrastructure. In many ways, WSIL is like a business card — it represents a specific entity, its services, and contact info, and is typically delivered directly by whom it represents.”

    So, why RSD? I think the answer may be that it is built for purpose and immediate implementation, just like RSS was. So, let’s give it a chance. It will be easy to migrate to WSIL if that becomes necessary, but for now, the developers seem to be doing wonders with stuff like RSD and RSS. I don’t know how many WSIL files there are out there already. Not a lot, I think, and my prediction for 2003 is that we’ll see far more RSD files than WSIL files.

    Read Sam Ruby‘s comments, and all the comments to the comments for more information on RSD.

    So, Slashdemocracy Blogging Networks now supports auto-discovery of the RSD. Besides being yet another XML file here, which is good to the extent I measure value by numbers, I have no real use for the file. Yet.
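
    To make the mechanics concrete, here is roughly what a client does with that auto-discovery: find the rsd.xml link in the page head, fetch it, and read out the advertised API endpoints. A rough sketch (the URL is a placeholder, the regex is naive about attribute order, and error handling is omitted):

    import re
    import urllib.request
    import xml.etree.ElementTree as ET

    # Step 1: find the RSD link element in the weblog's front page.
    # Convention: <link rel="EditURI" type="application/rsd+xml" href="..."/>.
    page = urllib.request.urlopen("http://example.com/").read().decode("utf-8")
    rsd_url = re.search(r'type="application/rsd\+xml"[^>]*href="([^"]+)"', page).group(1)

    # Step 2: fetch the RSD file and list the editing APIs it advertises.
    rsd = ET.fromstring(urllib.request.urlopen(rsd_url).read())
    for element in rsd.iter():
        if element.tag.endswith("api"):  # ignore the XML namespace prefix
            print(element.get("name"), element.get("preferred"), element.get("apiLink"))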

    Previous XMLify Slashdemocracy takes: 1, 2, 3, and 4.

  • METAPredictions

    Reuters: META Predicts Microsoft Will Offer Linux Software

    The Register: MS un-denial leaves Linux option open

    META Group thinks that by 2006/07, Linux on Intel (“Lintel”) will be on 45% of new servers (Intel will be on 95%+ of new servers). They also think Microsoft will enter this market, especially via .NET.

    META’s Bottom Line: Linux will have significant public-sector acceptance (especially overseas), which will encourage further commercial developments. By 2005/06, public-sector deployments (typically overseas) will provide the basis for accelerated Linux (and other open source) implementations.

    In related news, The Register has another interesting story about how Microsoft is bringing together Office 11 and .NET. This is important, and expected, news. It adds to the ongoing debates about open document formats in interesting new ways. If our office applications used open, standards-based web services, interoperability between brands would be eased. Then we’re not sharing documents, but services. Cool. Maybe. Let’s see.

  • Readable and browsable XML

    The XMLify Slashdemocracy venture continues. I’m now styling my XML, making it more human-readable and browsable. Look at the page source: it’s pure and unadulterated XML, with a bit of style added to it, using CSS (idea: W3C and Mozquito). Apart from the fact that old browsers will choke on the XML, it’s a pretty neat thing.
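
    The whole trick fits in a few lines: one processing instruction points the browser at a CSS file, and the CSS styles the XML elements directly. A small sketch that writes such a pair of files (file names, element names and styles are invented for the example):

    # Write a minimal XML document plus the CSS that styles it.
    xml_doc = """<?xml version="1.0" encoding="utf-8"?>
    <?xml-stylesheet type="text/css" href="weblog.css"?>
    <weblog>
      <entry>
        <title>Readable and browsable XML</title>
        <body>Pure and unadulterated XML, styled with CSS.</body>
      </entry>
    </weblog>
    """

    # Note: <body> here is our own XML element, not HTML's body tag.
    css_rules = """weblog { display: block; font-family: sans-serif; padding: 2em; }
    title { display: block; font-size: 1.4em; font-weight: bold; }
    body { display: block; margin-bottom: 1em; }
    """

    with open("weblog.xml", "w") as f:
        f.write(xml_doc)
    with open("weblog.css", "w") as f:
        f.write(css_rules)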

  • At your service, in 2007

    A new services-oriented view of infrastructure is required to ensure a successful long-term transition to effective Web services for the enterprise, according to META Group, which sees web services as essentially another (better) attempt at deploying and linking applications using open standards. Web services basically represent a shift from a component-oriented to a service-oriented infrastructure architecture. The analysts predict that web services will be significantly deployed first within the internal enterprise during 2002 and 2003, and from 2004 between enterprises. During 2002 and 2003, organisations will begin experimenting with elements of Web services, attempting to identify critical infrastructure issues and assessing the networkability of applications developed in those environments. As Web services mature, from 2004 to 2006, application networkability assessments (ANAs) will increasingly be used to rate both purchased Web services solutions and outsourced Web services. After 2007, a unified Web services network architecture will share a standard access method to those services.

    The crux: viewing infrastructure as a collection of components and patterns will expand to include application, technical, and operational services concepts. For Web services, the unit of composition is not defined via an application programming interface, but as a pattern of interactions between providers and subscribers and the information exchanged during those interactions (i.e., protocols and formats).

    W3C’s Web Services Architecture Working Group has published a Working Draft of 14 November 2002 where they define a web service as follows:
    A Web service is a software system identified by a URI, whose public interfaces and bindings are defined and described using XML. Its definition can be discovered by other software systems. These systems may then interact with the Web service in a manner prescribed by its definition, using XML based messages conveyed by internet protocols.

    Eric van der Vlist recently wrote an article, SOAP Web Services: built on a contradiction?, in which he argues that the RPC approach, one of the selling points of SOAP, can’t be called loosely coupled, and that the infrastructure needs to be totally refactored to be suited for Web Services. ::Manageability:: has a nice comparison between the old and the new:

                                Tight Coupling             Loose Coupling
    Interface                   Class and Methods          REST-like (i.e. fixed verbs)
    Messaging                   Procedure Call             Document Passing
    Typing                      Static                     Dynamic
    Synchronization             Synchronous                Asynchronous
    References                  Named                      Queried
    Ontology (Interpretation)   By Prior Agreement         Self-Describing (On The Fly)
    Schema                      Grammar-Based              Pattern-Based
    Communication               Point-to-Point             Multicast
    Interaction                 Direct                     Brokered
    Evaluation (Sequencing)     Eager                      Lazy
    Motivation                  Correctness, Efficiency    Adaptability, Interoperability

    And concludes: “[I]f the goal is to achieve better interoperability then we need to deviate from the familiar and be prepared to make some hard choices.”
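
    To make the first rows of that table concrete: a tightly coupled client calls a named procedure on a known interface, something like stock_service.get_price("IBM"); a loosely coupled client POSTs a self-describing document to a fixed verb and lets the receiver interpret it. A sketch of the latter in Python, against an imaginary endpoint:

    import urllib.request

    # A self-describing document instead of a procedure call.
    document = """<?xml version="1.0" encoding="utf-8"?>
    <quote-request>
      <symbol>IBM</symbol>
      <currency>USD</currency>
    </quote-request>
    """

    request = urllib.request.Request(
        "http://example.com/quotes",  # hypothetical service endpoint
        data=document.encode("utf-8"),
        headers={"Content-Type": "application/xml"},
    )
    with urllib.request.urlopen(request) as reply:
        print(reply.read().decode("utf-8"))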