Thursday, November 30, 2006

Literary Confusion according to Babel


As mentioned in a previous article, I have decided to also write about (i.e. to review) the books I'm currently reading. I realized that my ability to express myself in English is rather limited when it comes to writing non-scientific content. I was also wondering whether it makes sense to write (in English) about books (written in German)...esp. if I want to address a German-speaking audience (at least concerning the non-scientific content of this blog). Fortunately, the new beta Blogger allows the use of tags, and thus it will be possible to categorize articles (and to distinguish between content written in English and in German). To make it short: from now on, reviews of non-scientific books written in German will also be written in German. Everything else (including reviews of books that I have read in English) will be written in English.
To all those who are also interested in those book reviews: you might consider using Babelfish for translation (although I have no idea about the result :)...

When I attended the ISWC a few weeks ago, I had already complained that my current reading was simply too 'heavy' for my carry-on luggage. So I faced the problem of what to read during the flights, each lasting more than nine hours (of course I also slept in between...as well as I could...). Normally I don't like reading several books in parallel, and since I wanted to avoid ending up with two half-finished books after my trip, I opted for short stories and novellas. A small volume of stories by Thomas Mann had been sitting on my bookshelf for quite a while, and so far I had never been able to summon the muse to read it. Since it met the requirements for my travel reading (at least in terms of weight), during the flight I lived through the world of 'Tonio Kröger', tasted the curse of 'Wälsungenblut' ('The Blood of the Walsungs'), and, as a contemporary who is completely indifferent towards dogs, acquainted myself with the depths of the relationship between 'Herr und Hund' ('Man and Dog')....

All right, perhaps I should begin by mentioning that I never had to make the acquaintance of Thomas Mann's works as required reading in German class. Many of my acquaintances groan out loud the moment the name 'Thomas Mann' is even dropped...certainly because unwelcome, long-suppressed memories of the soap opera 'Buddenbrooks', interpreted and analyzed ad nauseam at school, force their way back to the surface. Not so with me. I first had the Buddenbrooks in front of me at the 'tender age' of almost 30. I thought, "that should last me through Christmas week", but this monster of a family saga drew me in so completely that after two and a half days I put it back on the shelf (sighing quietly, because it was 'already' over). Recently, however, over a shared lunch at the Frankfurt Book Fair, the conversation turned to the 'most overrated' German authors. After I had expressed to my neighbor my incomprehension at her assessment that Fontane topped her list, 'Der Zauberberg' ('The Magic Mountain') and especially 'Doktor Faustus' came to my mind. Without going into depth here, allow me the brief remark that "Thomas Mann would certainly have kept a better reputation in my memory had he at least refrained from writing the latter of the two novels mentioned...".

'Tonio Kröger' offers all the highs and lows of Mann's narrative tradition in condensed form. The relatively short novella begins with the story of Tonio Kröger's childhood and adolescence in the best entertaining 'Buddenbrooks' manner, and in its second half pours itself into an introspection (-> see 'Der Zauberberg') of Tonio in his role as an artist (in dialogue and in letters to his artist friend Lisaweta). In the final third, our hero travels back to the places of his childhood and begins to grasp that, as an artist, he is very much part of the 'society' he contemptuously rejects, and that feelings do not inhibit an artist's work but actually drive it....(-> for the transformation, see 'Doktor Faustus').
Verdict: a short trip through Thomas Mann's microcosm, highly recommended for anyone who wants a 'quick taste' of his world without having to risk facing one of the aforementioned 'monsters' :)

'Wälsungenblut' is knit somewhat differently - even though the impressively vivid portrayal of the curiously decadent banker family Aarenhold is at times reminiscent of the Addams Family.... Thomas Mann caricatures the bombastic pathos of Richard Wagner's operatic atmosphere - here quite specifically that of the 'Walküre'. The novella depicts the incestuous love of the Walsung siblings Siegmund and Sieglinde, and Mann succeeds in delivering a staging made perfect by its love of detail, ranging from the artistic pleasure of the opera quotation with a cognac cherry all the way to the love scene on the bearskin (-> see Wagner's 'Walküre').
Verdict: an entertaining piece of quirky prose in which Thomas Mann proves himself a master of pointed portrayal and an 'anti-Wagnerian'...

Tuesday, November 28, 2006

UIMA - Unstructured Information Management Architecture

This morning we were invited to a talk given by Thilo Götz from IBM about UIMA (Unstructured Information Management Architecture), IBM's framework for the management of unstructured information; the talk took place at the department of computational linguistics.
UIMA comprises (1) an architecture and (2) a software framework for the analysis of unstructured data (just for the record: structured data refers to data that has been formally structured, e.g. data within a relational database, while unstructured data refers e.g. to text in natural language, speech, images, or video data). The purpose of UIMA is to provide a modular framework that enables easy integration and reuse of data analysis modules. In general, the UIMA framework distinguishes three steps in data analysis:

(1) reading data from distinct sources
(2) (multiple) data analysis
(3) presentation of data/results to the 'consumer'

The framework also enables remote processing (and thus simple parallelization of analysis tasks). Unfortunately, at least up to now, there is no grid support for large-scale parallel execution.
Simple applications of UIMA were also presented, e.g. in semantic search (although their approach to semantic search means: do information retrieval on unstructured data and feed the resulting data into the index of the 'semantic search engine'...).
Nevertheless, we will take a closer look at UIMA. We are planning to map the workflow of our automated semantic annotation process (see [1]) into the UIMA architecture, and I will report on the experiences we make....
UIMA is available as a free SDK, and the core Java framework is also available as open source.
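To give an impression of what such an analysis module looks like in code, here is a minimal annotator sketch (hedged: it follows the JCas API of the open-source Apache UIMA release; the package layout of IBM's own SDK may differ). It marks whitespace-separated tokens as generic Annotations in the CAS; a real component would declare its own annotation type in an XML type system descriptor.

    import org.apache.uima.analysis_component.JCasAnnotator_ImplBase;
    import org.apache.uima.analysis_engine.AnalysisEngineProcessException;
    import org.apache.uima.jcas.JCas;
    import org.apache.uima.jcas.tcas.Annotation;

    // A minimal UIMA analysis engine: scans the document text and records
    // each whitespace-separated token as an Annotation in the CAS index,
    // where downstream components (step 3: the 'consumer') can pick it up.
    public class SimpleTokenAnnotator extends JCasAnnotator_ImplBase {
        @Override
        public void process(JCas jcas) throws AnalysisEngineProcessException {
            String text = jcas.getDocumentText();   // the unstructured input
            int start = -1;
            for (int i = 0; i <= text.length(); i++) {
                boolean ws = (i == text.length()) || Character.isWhitespace(text.charAt(i));
                if (!ws && start < 0) {
                    start = i;                       // a token begins here
                } else if (ws && start >= 0) {
                    new Annotation(jcas, start, i).addToIndexes();
                    start = -1;                      // token ends, span recorded
                }
            }
        }
    }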

References:
[1] H. Sack, J. Waitelonis: Automated Annotations of Synchronized Multimedia Presentations. In: Proceedings of Mastering the Gap: From Information Extraction to Semantic Representation (MTG06 / ESWC2006), Budva, Montenegro, June 12, 2006.

Tuesday, November 21, 2006

Document Retrieval vs. Fact Retrieval - In Search for a Qualified User Interface


Today, if you are looking for information on the Web, you enter a set of keywords (a query string) into a search engine, and in return you receive a list (an ordered set) of documents that are supposed to contain those keywords (or their word stems). This list of documents (hence 'document retrieval') is ordered according to each document's relevance with respect to the user's query string. 'Relevance' - at least for Google - refers to PageRank. To make it short, PageRank reflects the number of links referring to the document under consideration, each link weighted with the relevance of its source document and adjusted by the total number of links starting at the document that contains the link (plus some black magic that is still proprietary, see U.S. Patent 6,285,999).
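For the record, here is a common (simplified) formulation of the PageRank equation as published by Brin and Page - i.e. without the proprietary adjustments mentioned above:

    PR(p) = \frac{1-d}{N} + d \cdot \sum_{q \in B_p} \frac{PR(q)}{L(q)}

where B_p is the set of pages linking to p, L(q) is the number of outgoing links of page q, N is the total number of pages, and d is a damping factor (typically set to 0.85).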
But is this list really what the user expects as an answer? O.k., meanwhile we - the users - have become used to this kind of search engine interface. In fact, there are books and courses about how to use search engines in order to get the information you want. The interesting fact is that it is the user who has to adapt to the search engine interface....and not vice versa.
Instead, it should be the other way around. The search engine interface should adapt to the user - and even better, to each individual user! But how, then, should a search engine interface look? In fact, there are already search engines that are able to answer simple questions ('What is the capital of Italy?'). But they still fail at answering more complex questions ('What was the reason for Galileo's house arrest?').

In real life - at least if you happen to have one - if you are in need of information, you have different possibilities for getting it:

  1. If there is somebody you can ask, then ask.
  2. If there is nobody to ask, then look it up (e.g. in a book).
  3. If there is nobody to ask, and if there is no way to look it up, then think!

Let's consider the first two possibilities. Both have their drawbacks. Asking somebody is only helpful if the person being asked knows the answer. (O.k., there is also the social aspect that you might get another reward just by making social contact...instead of getting the answer.) If the person does not know the answer, maybe she/he knows whom to ask or where to look it up. We might consider this a kind of referential answer. On the other hand, even if the person does know the answer, she/he might not be able to communicate it. Maybe you speak different languages (not necessarily different languages in the sense of 'English' and 'Swahili'; consider also a philosopher answering the question of an engineer...). Sometimes you have to read between the lines to understand somebody's answer. In some sense, we have to 'adapt' to the way the other person gives the answer in order to understand it.
Considering the other possibility of looking up the information, we have the same situation as when asking a web search engine. E.g., if we look up an article in an encyclopedia, we use our knowledge of how to access the encyclopedia (the alphabetical order of entries, reading the article, following links to other articles...being able to read...).
Have you noticed that in both cases we have to adapt ourselves to an interface? Even when asking somebody, we have to adapt to the way this person talks to us (her/his level of expertise, background, context, language, etc.). From this point of view, adapting to Google's search engine interface does not seem to be such a bad thing after all....

When it comes to fact retrieval, the first thing to do is to understand the user's query. To understand an ordinary query (and not only a list of interconnected query keywords), natural language processing is the key (or, as they say, the 'holy grail'). But even if the query phrase can be parsed correctly, we have to consider (a) the context and (b) the user's background knowledge. While the context helps to disambiguate and to find the correct meaning of the user's query, the user's background determines her/his level of expertise and thus the level of detail at which the answer is best suited for the user.

Thus, I propose that there is no such thing as 'the perfect user interface'. Rather, different kinds of interfaces might serve different users in different situations. No matter what the interface looks like, we - the users - will adapt (because we are used to doing that, and we learn very quickly). Of course, if the search engine is able to identify the user's circumstances (maybe she/he is retrieving information orally via a cell phone, or sitting in front of a keyboard with a huge display), the search engine may choose (according to the user's infrastructure) the suitable interface for entering the query as well as for presenting the answer...

WebMonday 2 in Jena - Aftermath


Yesterday evening, the 2nd WebMonday took place in the Jena Intershop Tower. I thought that the number of participants who happened to come by last time could not be surpassed (we had almost 50 people up there), but believe it or not, I counted more than 70 people this time! Lars Zapf moderated the event, and we had four interesting speakers this evening.
For me, the most interesting talk was the presentation by Prof. Benno Stein from the Bauhaus-University Weimar about information retrieval and current projects. He addressed the way we use the web today for retrieving information. Most current search engines offer only 'document retrieval', i.e. after evaluating the keywords given in the user's query string, the search engine presents an ordered list of documents that the user has to read in order to get the information. Instead, the more 'natural' way to get information would be to ask a question and to receive a 'real' answer (= fact retrieval). I will discuss these different types of 'user interfaces' in an upcoming post. It is interesting that Weimar is so close to Jena and that our research really seems to have some interconnections (thus, this new contact might be considered another WebMonday networking success).
After that, Matthias Leonhard gave the first part of a series of talks related to Microsoft's .NET 3.0.
Then, Ryan Orrock addressed the problem of 'localization' and translation of applications. When translating an application into another language, simply translating all text parts is not sufficient. There are also different units of measure to consider, as well as the adaptation of the screen design if texts in different languages have different sizes.
In the last presentation, Karsten Schmidt addressed networking with openBC/Xing, an interesting social networking tool for making business contacts. (At least now I know that I need some other tool to store (physically) my (and other people's) business cards :) ).
Even more interesting was - as always - the socializing part after the presentations. Markus Kämmerer took some photos.

Here you can find other blog articles on the 2nd WebMonday:

Wednesday, November 15, 2006

wikipedia to serve as a global ontology....



Today I met Lars Zapf for a quick coffee, enjoying the rare late-afternoon November sun. We exchanged news about ISWC, WebMonday, recent projects, and stuff like that. While talking about semantic annotation, Lars pointed out that instead of using (or developing) your own ontologies for annotating (and authoring) documents, you could also use a wikipedia reference to indicate the semantic concept that you are writing about. Thus, as he already wrote in a comment, you could use the link http://en.wikipedia.org/wiki/Rome to indicate that you are referring to the city of Rome, the capital of Italy.
Of course, you might object that there are several language versions of wikipedia and thus several (different) articles that refer to the city of Rome. To use wikipedia as a 'commonly agreed and shared conceptualization' - fulfilling at least some points of Tom Gruber's ontology definition, as long as wikipedia lacks the 'formal' aspect of machine understandability - we can make use of the fact that articles in wikipedia can be identified with articles in other language versions with the help of the language links at the lower left side of wikipedia's user interface. To serve as a real ontology, each wikipedia article should (at least) be connected to a formalized concept (maybe encoded in RDF or OWL). This concept does not necessarily have to reflect all the aspects that are reported in the natural language wikipedia article. E.g., Semantic MediaWiki is working on a wiki extension to capture simple conceptualizations (such as classes or relationships).
An application for authoring documents could easily be upgraded to offer links to related wikipedia articles. If the author enters the string 'Rome', the application could offer the related wikipedia link to Rome [or any selection of related offers], and according to the author's choice, this link can then be automatically encoded as a semantic annotation (link).
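Just to make the idea concrete, here is a minimal sketch of how such an annotation could be encoded in RDF using the Jena API (hedged: the document URI and the choice of the Dublin Core 'subject' property are my own assumptions for illustration, not part of Lars' proposal):

    import com.hp.hpl.jena.rdf.model.Model;
    import com.hp.hpl.jena.rdf.model.ModelFactory;
    import com.hp.hpl.jena.rdf.model.Property;
    import com.hp.hpl.jena.rdf.model.Resource;

    // Sketch: state 'this document is about Rome' by pointing to the
    // Wikipedia article as the concept identifier.
    public class WikipediaAnnotation {
        public static void main(String[] args) {
            Model model = ModelFactory.createDefaultModel();
            // the document being authored (hypothetical URI)
            Resource doc = model.createResource("http://example.org/my-article");
            // Dublin Core 'subject' as the (assumed) annotation property
            Property subject = model.createProperty("http://purl.org/dc/elements/1.1/subject");
            // the Wikipedia URL serves as the commonly agreed concept identifier
            doc.addProperty(subject, model.createResource("http://en.wikipedia.org/wiki/Rome"));
            model.write(System.out, "RDF/XML");     // serialize the annotation
        }
    }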
O.k., that sounds pretty simple. Are there any students out there who would like to implement it (anybody in need of credit points??)? I would highly appreciate that...

International Semantic Web Conference 2006 (ISWC 2006) - Aftermath


Back home again, the jetlag is almost gone, while I am already travelling to Potsdam again for a talk on 'Semantic Annotation in Use'....After all, ISWC 2006 was a very nice and interesting conference...although it was set up at a rather remote location (at least from my point of view). One of its highlights (as already pointed out) was the panel discussion about web 2.0 and the semantic web. Leo Sauermann raised the question why there is such bad marketing of semantic web technology. Obviously - as TBL replied - because the W3C invests all its funding into hiring scientists and not marketing people. One of the major problems is that semantic web applications don't look 'cool' and 'sexy'...and therefore they don't get public attention. BTW, right at the same time, the 3rd Web 2.0 conference took place in San Francisco. Why did the organizers of ISWC not try to arrange a panel discussion with a live connection between the two conferences? At ISWC, of course, there were only 'Semantic Web' people and (at least as far as I could tell) nobody from the 'Web 2.0' community. All right, you can be both SemWeb and Web 2.0. But as long as you are focused on the Semantic Web, most people will share a common focus on (and common arguments about) Web 2.0. Another point of view would certainly have been interesting to listen to.

Closely related to the lack-of-marketing question is the question about the Semantic Web killer application. Nobody knows what type of application it will be - of course...otherwise the application would already be there. But, as with all killer applications, it will not necessarily be something 'really' useful :) Consider that the killer application for the WWW (at least in its early beginnings back at CERN) was the 'telephone book'. Not to mention SMS and the mobile phone. Maybe the semantic web killer application will be related to rather ordinary applications, such as a dating service that is really able to find a match....

BTW, I have switched to the new beta release of blogger.com... (comments should be working now - at last! - and also keywords)...and the other guy in the picture is Ulrich Küster (also from FSU Jena) at the ISWC dinner reception...

Thursday, November 09, 2006

International Semantic Web Conference 2006 (ISWC 2006), Athens (GA), USA - Day 3


Thursday, the last day of ISWC, started with a keynote by Rudi Studer from the University of Karlsruhe on 'The Semantic Web: Suppliers and Customers'. He drew the historic line from databases to the Semantic Web as a web of human-readable content connected to machine-interpretable semantic resources. He also pointed out the importance of interdisciplinary research for realizing the Semantic Web, while on the other hand the Semantic Web also contributes to other disciplines and communities. After that, I listened to an interesting talk by Andreas Harth from DERI on 'Crawling and Indexing the Semantic Web', where he introduced an architecture for a semantic web crawler and gave some first results.
The most interesting talk of the day was the one by Ivan Herman from W3C (here you can find his foaf data) on 'Semantic Web @ W3C: Activities, Recommendations and State of Adoption'. He proposed 2007 to be the 'year of rules', because we might finally come to a recommendation concerning rule languages for the Semantic Web. He also mentioned the efforts of integrating RDF data into XHTML via RDFa or - vice versa - of getting RDF data out of XHTML with GRDDL.
ISWC closed with the announcement of the best paper awards and the winners of this year's Semantic Web Challenge.
If you are interested in the conference, you might have a look at the video recordings of the talks.

International Semantic Web Conference 2006 (ISWC 2006), Athens (GA), USA - Day 2


Wednesday...the 2nd day of ISWC started with a keynote by Jane E. Fountain from the University of Massachusetts in Amherst about 'The Semantic Web and Networked Governance'. From her point of view, governments have to be considered major information processing [and knowledge creating] entities in the world, and she tried to point out the key challenges faced by governments in a networked world (for me the topic was not that interesting...). Today's sessions - at least those that I attended - were also not that exciting. I liked one presentation given by Natasha Noy from Stanford on 'A Framework for Ontology Evolution in Collaborative Environments' in the 'Collaboration and Cooperation' session. She presented an extension of the Protégé ontology editor for collaborative ontology development.
The most interesting session for me was the 'Web 2.0' panel in the afternoon. Among the panelists were Prof. Jürgen Angele (Ontoprise), Dave Beckett (Yahoo!), Sir Tim Berners-Lee (W3C), Prof. Benjamin Grosof (MIT Sloan School of Management), and Tom Gruber. The panel discussed the role of semantic web technology for web 2.0 applications.


Jürgen Angele pointed out that the only thing that is really new about web 2.0 is ad-hoc remixability. Everything else is nothing but 'old' technology. But, as he stated, web 2.0 could be a driving force for semantic web technology.

Dave Beckett did some advertising for Yahoo!, pointing out that Yahoo! indeed makes use of semantic web technology (at least in their new system called Yahoo! Food) and that Yahoo! is a great participation platform with more than 500 million visitors per month.

Tim Berners-Lee gave a survey of the flaws and drawbacks of web 2.0 and how semantic web technology could help. While web 2.0 is not able to provide real inter-application integration, the semantic web on the other side does not provide such cool interfaces to data. Combined, they could become interesting.
All the so-called new aspects of web 2.0 were already goals of the original web (1.0): easy creation of content, collaborative spaces, intercreativity, collective intelligence from designing together, creating relationships, reuse of information, and of course user-generated content. The web 2.0 architecture consists of client-side (AJAX) interaction and server-side data processing (aka the good old 'client-server' paradigm) plus mashups (one per application / each needs coding in javascript, each needs scraping/converting/...). Essentially, web 2.0 is fully centralized. So why are skype, del.icio.us, or flickr websites instead of protocols (as foaf is)? The reuse of web 2.0 data is limited to the host side. Only with the help of feeds are data able to break out of centralized sites. What will happen with all of your tags? Will they end up as simply being words, or will they become real (and useful) URIs?
With semantic web technology, web 2.0 enables multiple identities for you. You may have many URIs, enabling you to access different sorts of data and to fulfill different expectations concerning trust, accuracy, and persistence. In the end, web 2.0 and the semantic web, while being good separately, could be great together!

Benjamin Grosof asked where semantic web technology could help web 2.0. He focused on backend semantic integration and mediation (augmenting your information via shallow inferences), collaboration, and semantic search. Semantic search will enable a more human-centered search interface, e.g., 'Give me all recipes for cake....but I don't like any fruit' or 'I want a good recommendation from a well-reputed web site'. He sees semantic web technology piggybacking on web 2.0 interactions ('web 2.0 = search for terrestrial intelligence in the crowd' :)). The semantic web should exploit web 2.0 to obtain knowledge.

Tom Gruber asked 'Where is the mojo in Web 2.0?'. He characterized web 2.0 as a fundamentally democratic architecture, driven by social and entertainment payoffs (universal appeal...), while the web 1.0 business model actually keeps working (the 'attention economy'). He discussed the way from today's 'collected intelligence' to real 'collective intelligence'. He concluded: 'don't ask what the web knows....ask what the world knows!' and 'don't make the web smart...make the world smart'.

Wednesday, November 08, 2006

International Semantic Web Conference 2006 (ISWC 2006), Athens (GA), USA - Day 1


Tuesday morning, 9 a.m. ... ISWC 2006 starts with the keynote of Tom Gruber (godfather of the computer science based definition of the term 'ontology') on 'Where the Social Web Meets the Semantic Web'. He focused on 'Collective Intelligence' as the reason that companies like Google or Amazon survived the first dot-com bubble: they were making use of their users' collective knowledge. Google uses other people's intelligence by computing a page rank out of the users' links to other webpages. Amazon uses people's choices for its recommendation system, and eBay uses people's reputations. The interesting thing about this is that the notion of 'Collective Intelligence' (aka 'Social Web', aka 'Web 2.0') was already addressed by Douglas Engelbart in the late 1960s. Engelbart did not only invent the mouse, the window-based user interface, and many other important things that are part of today's computing environment; his driving force - as Gruber said - was 'Collective Intelligence'....to cope with the set of growing problems that humanity is facing today. Thus - as I have also stated in another post - the semantic web, too, depends on the collaboration and participation of its users and therefore on 'Collective Intelligence' to become a success.

BTW, I prefer using the term 'Social Web' instead of 'Web 2.0'. From my point of view, 'Social Web' hits the nail on the head and does not suggest any new and exciting technology (but only the fact that people are using existing web technology in a collaborative way to interact with each other).

After the keynote, I visited the 'Knowledge Representation' session with an interesting talk by Sören Auer on OntoWiki (a semantic wiki system...interesting, because one of my students is also implementing a semantic wiki). In the afternoon sessions, I especially liked the talks about representation and visualization (esp. the talk of Eyal Oren on 'Extending faceted navigation for RDF data', where he presented a nice server application that is able to visualize arbitrary RDF data). In the evening, a dinner buffet (including Cuban music) was combined with the poster session and the 'Semantic Web Challenge' exhibition, where I found the possibility of a cooperation with Siegfried Handschuh from DERI (on semantic authoring and annotation....).

Oh...I almost forgot to mention that there is also a flickr group with ISWC photographs...

Monday, November 06, 2006

International Semantic Web Conference 2006 (ISWC 2006), Athens (GA), USA - Day 0


The very first day here at ISWC...ok, it's the workshop and tutorial day. Officially, the conference will start tomorrow morning with Tom Gruber's keynote. I already arrived here in Athens on Saturday. The 10-hour flight from Frankfurt was really exhausting...at least I had no stop-over. Athens is about 90 minutes away from Atlanta and is famous for its university, the oldest publicly funded university in the US. It has a really nice historic campus (I will provide some pictures later on).
Today started with the "1st Semantic Authoring and Annotation Workshop" (SAAW 2006), where I had two papers to present...two days ago I was told (by email) that the short presentations would be 'lightning talks' of 5 minutes each. I had prepared slides for talks of some 15 minutes :) ...and was a little bit 'pissed off' at having to throw away all the 'interesting stuff'. But at least I could raise the interest of a few people. The afternoon's workshop (on Web Content Mining with Human Language Technologies) also had some interesting topics. Especially, I liked the talk of Gijs Geleijnse about 'Instance Classification using Co-Occurrences on the Web'. It was about classifying musicians and artists (as instances) by their genre (as concepts) by finding co-occurrence relationships of terms with the help of Google.
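For illustration, here is a rough sketch of the co-occurrence idea as I understood it (hedged: my own reconstruction, not Geleijnse's actual method; hitCount() is a stub that in a real system would query a search engine):

    import java.util.Arrays;
    import java.util.List;

    // Toy reconstruction: assign an instance (e.g. a musician) the genre
    // whose name co-occurs most strongly with the instance name on the web.
    public class CooccurrenceClassifier {

        // stub: would return the number of web hits for the given query
        static long hitCount(String query) {
            return query.length();   // placeholder so the sketch runs at all
        }

        static String classify(String instance, List<String> genres) {
            String best = null;
            double bestScore = -1.0;
            for (String genre : genres) {
                long joint = hitCount("\"" + instance + "\" \"" + genre + "\"");
                long single = hitCount("\"" + genre + "\"");
                double score = (double) joint / (single + 1);  // crude normalization
                if (score > bestScore) { bestScore = score; best = genre; }
            }
            return best;
        }

        public static void main(String[] args) {
            List<String> genres = Arrays.asList("jazz", "classical", "heavy metal");
            System.out.println(classify("Miles Davis", genres));
        }
    }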

Wednesday, November 01, 2006

From the 'Hiroshima Gate' to 'Confusion'......


O.k....I decided to start putting reviews of currently read books into the blog. The very first one is 'The Hiroshima Gate' by Ilkka Remes. I don't think it is available in the U.S. yet....I guess the reason for this is that the book was originally written in Finnish, and the stories told by Ilkka Remes put a Finnish executive of the European Community - Timo Nortamo - in the focus. Timo Nortamo is occupied with solving a strange case of murder that happened in Paris. While delivering a disk with secret KGB data (old but seemingly important data from the times of the Cold War), a woman jumps from a bridge into the river Seine...and is found with her throat cut. But besides the KGB data - which might put Finland's current prime minister under suspicion of having conspired with the KGB - the disk contains additional data that attracts secret agencies from around the world...you have to listen to a lot of conspiracy theories, ranging from aliens from outer space (Erich von Däniken revisited...) and ancient superior civilizations up to anti-matter bombs....
Well...in the end, everything turns out to be rather down-to-earth (without giving away the story...). An interesting (well...not really: I don't like these very short chapters...just cliffhangers and more cliffhangers...) mixture of Dan Brown and Michael Crichton, but less mystical and less scientific (if you can say so). Remes introduces you to a Europe-centered world with the USA and China acting as the villains. The characters remain pretty flat, although you read a lot about our hero's family problems (marriage and cheating, father-son conflicts, alcohol problems, or Nortamo losing his temper...). I wouldn't read the book a second time...thus, I guess, this means at most 2-3 stars out of 5.
Yesterday, I began to read 'Confusion', the second book of Neal Stephenson's Baroque Cycle. Unfortunately, it is much too heavy (almost 1.5 kg) to take with me on the flight to the ISWC (International Semantic Web Conference) in Athens (GA, USA) on Saturday....