After the dinner cruise on the river Spree, the second day of Adaptive Multimedia Retrieval 2008 again starts with an interesting invited talk, this time on the European answer to Google's search engine technology: THESEUS.
Karsten Müller from the Fraunhofer Heinrich-Hertz-Institut is presenting on "THESEUS Project - Applications and Core Technologies for the Semantic Web". First, Karsten makes clear that THESEUS doesn't want to be Google ;-) THESEUS is a research program for a new internet-based knowledge infrastructure... which from my point of view means nothing else but "the semantic web"...
One part of the THESEUS project is ALEXANDRIA, the virtual library, led by Yahoo!, with the objective of semantic processing of different forms of content to enable faster access to relevant content, which in turn means an increase in information quality. Concepts such as an automated tagging framework (including language error correction, synonym & tag merging, topic focusing, and identification of semantic relations), innovative navigation (presenting thematically related content), and interaction concepts are involved.
Another part is ORDO, which deals with "Organizing your digital life", with the goal of unifying various data formats, multilingual information, and structured and unstructured data on the web to enable homogeneous information sources. Problems such as separating the important from the unimportant, ordering information instead of searching for it, prioritization, and the identification and visualization of interrelations are addressed.
TEXO is another part, with the objective of "Realizing the internet of services" (led by SAP Research), offering personalized, customized services, community involvement to improve services, as well as smooth & seamless (user-friendly) adaptation and integration of services.
PROCESSUS deals with the "Optimization of business processes", aiming to provide the user with the right information at any stage of the business process.
MEDICO is another subproject, "Towards Scalable Semantic Image Search in Medicine", led by Siemens.
CONTENTUS, the last use case, "Content access and generation from cultural institutions", is led by the Deutsche Nationalbibliothek. Part of CONTENTUS are tasks such as digitizing books as well as audiovisual material (including the German Music Archive in Berlin) to protect the cultural heritage. The goal is a semantically interlinked collection of content to achieve a next-generation multimedia library.
.....impressive and ambitious project!
The upcoming section this morning is on "Image Tagging", and Marius Renn (at least I hope so) from TU Kaiserslautern is giving a presentation on "Automatic Image Tagging using Community-Driven Online Image Database". Automatic image tagging requires a lot of training data... and flickr delivers tons of tags per day... but are these flickr data really good candidates for learning? In the end, unfiltered community image sets do not directly provide satisfying results. Alas, these databases at least allow large-scale image aggregation...
The next talk in this session is given by Christian Hentschel from Fraunhofer HHI Berlin about "Automatic Image Annotation Refinement using Object Co-Occurrences". Again, flickr is the target image set, with its huge collection of more than 2 billion images, growing by 3 million photos every day. Objects always appear, and are perceived, in a semantic context.
The following session is on "Symbolic Music Retrieval" and starts with Rainer Typke from the Austrian Research Institute for Artificial Intelligence (ÖFAI), but I had to skip this talk. Anyway, the samples of the reduced MIDI files were quite interesting (although I'm not a fan of the Scorpions!). I had to ask afterwards about the usefulness and application of his approach: in music retrieval it can be used to reduce the index size down to 30% of the original, QBE processing becomes much easier, and on the other hand you might connect this MIDI collection to real music files.
The last talk of the morning session is given by Giancarlo Vercellesi from the University of Milan on "Automatic synchronization between audio and partial music score presentation". He presents the ParSi architecture, which performs an alignment of the PCM signal and partial MIDI scores.
The afternoon session is simply entitled "Systems". Fernando López from Madrid is giving a presentation on "Towards a fully MPEG-21 compliant adaption engine: complementary description tools and architectural models". Within the MPEG-21 framework, several aspects of metadata-driven adaptation are not clearly covered. He introduces CAIN, a tool for adapting Digital Items, e.g. to different output devices.
The session continues with a presentation on "Mobile museum guide based on fast SIFT recognition", with the objective of identifying paintings in galleries simply with the help of mobile pattern recognition, without any extra installation on site. SIFT (Scale Invariant Feature Transform) is a rather cool algorithm for detecting local features within images, here used to map photographs taken with your PDA or mobile phone in the gallery to reference pictures from a given database. And the live demo actually did work :)
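The descriptor-matching step behind such a recognizer is typically done with Lowe's ratio test: a query descriptor counts as a match only if its nearest reference descriptor is clearly closer than the second nearest. Below is a minimal, purely illustrative sketch of that voting idea (the function names, descriptor layout, and ratio threshold are my assumptions, not the authors' implementation; real SIFT descriptors are 128-dimensional vectors produced by the SIFT detector):

```python
import math

def ratio_test_match(query_desc, ref_desc, ratio=0.75):
    """Lowe's ratio test: keep a match only if the nearest reference
    descriptor is clearly closer than the second nearest one."""
    matches = []
    for qi, q in enumerate(query_desc):
        # distance of this query descriptor to every reference descriptor
        dists = sorted((math.dist(q, r), ri) for ri, r in enumerate(ref_desc))
        (d_best, ri_best), (d_second, _) = dists[0], dists[1]
        if d_best < ratio * d_second:
            matches.append((qi, ri_best))
    return matches

def identify_painting(query_desc, database):
    """Vote over a database mapping painting id -> descriptor list:
    the painting with the most ratio-test matches wins."""
    scores = {pid: len(ratio_test_match(query_desc, refs))
              for pid, refs in database.items()}
    return max(scores, key=scores.get)
```

With real images, the descriptor lists would come from a SIFT extractor run over the reference paintings and the phone snapshot; the ratio test then filters out ambiguous matches before the vote.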
I guess we will also use the SIFT algorithm in yovisto for synchronizing ppt/pdf slides with the lecture video.
For the last session - "Structuring of Image Collections" - only one speaker showed up. Marc Gelgon is presenting on "Geo-temporal structuring of a personal image database with two-level variational Bayes mixture estimation".
[to be continued...]
Friday, June 27, 2008
Thursday, June 26, 2008
Adaptive Multimedia Retrieval 2008 in Berlin, June 26-27, 2008
The next two days, we are attending the Adaptive Multimedia Retrieval 2008 Workshop at the Heinrich Hertz Institute, located in downtown Berlin. So it's pretty close to home, and the only travelling involved was by S-Bahn :)
The first speaker is François Pachet from Sony CSL, giving a keynote entitled "What are our audio features worth?"
The fundamental questions are "What makes objects what they are?", "What are the features of subjectivity?", and "How do we perceive objects, and how can we transfer this to a machine?" Pachet's research is concerned with the classification of musical objects based on the so-called polyphonic timbre, which describes the sum of all features of a music object. An interesting point is the identification of hubs, i.e. songs that are pretty close to every other song. Hubs in general seem to be mere artefacts of static models.
An interesting fact is that there are now companies predicting whether your song is going to be a hit. Their judgement also relies on feature analysis, and they even give recommendations on how your song can be improved to become a hit. Of course you have to pay for that service... but does it really work??
After the coffee break, there's a session on User-Adaptive Music Retrieval. The first talk is presented by Kay Wolter from Fraunhofer IDMT Ilmenau on "Adaptive User-Modelling for Content-Based Music Retrieval". They adapt a content-based music retrieval (CBMR) system according to user preferences, determined by the user's acceptance or rejection of recommended songs, which is furthermore used to improve the quality of music recommendations... Reminds me somehow of Pandora or last.fm...
The second talk is presented by Sebastian Stober from Otto-von-Guericke-Universität Magdeburg on "Towards User-Adaptive Structuring and Organization of Music Collections". So, wouldn't it be nice to structure your music collection automatically... but not in the way the software tells you, but the way you like it? The presented system is based on a general adaptation approach using self-organizing maps that can be adapted by user interaction.
The first afternoon session is on "User-adaptive Web Retrieval" and starts with a presentation by Florian König from Johannes-Kepler-Universität Linz on "Using thematic ontologies for user- and group-based adaptive personalization in web searching". He introduces Prospector, a generic meta-search layer for Google (not constrained to web search only), based on re-ranking of search results and deploying user models based on Open Directory Project (ODP) taxonomies. As far as I have understood, the application is based on the carrot2 framework for open-source search engine result clustering.
Next, David Zellhöfer from BTU Cottbus presents on "A Poset Based Approach for Condition Weighting". Similarity search can be determined according to different conditions w.r.t. the search query. Especially, different people have different expectations when it comes to similarity. So, condition weights have to be determined by psychological experiments.
The second afternoon session is about "Music Tracking and Thumbnailing" and starts with a presentation by Tim Pohle from Johannes-Kepler-Universität Linz on "An Approach to Automatically Tracking Music Preference on Mobile Players". OK, so the basic problem is: someday you will get bored by the music selection on your iPod. Therefore, the goal is to remove songs that you don't like anymore and replace them with new songs that you probably will like. How do you achieve this? Well, with user feedback, i.e. by tracking the user's decisions to play or skip tracks. Tracks that have recently been skipped often are dropped and replaced by tracks that are similar (according to some feature analysis) to the remaining tracks.
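A rough sketch of such a skip-driven replacement policy might look like this (entirely hypothetical: the function names, the drop threshold, and the plug-in similarity function are my assumptions, not the paper's actual method):

```python
def refresh_playlist(playlist, skip_counts, candidates, similarity,
                     drop_threshold=3):
    """Drop tracks that were recently skipped at least `drop_threshold`
    times and refill the playlist with unused candidates, preferring the
    candidate most similar to a track that survived."""
    kept = [t for t in playlist if skip_counts.get(t, 0) < drop_threshold]
    dropped = len(playlist) - len(kept)
    pool = [c for c in candidates if c not in playlist]
    for _ in range(dropped):
        if not pool or not kept:
            break
        # score each candidate by its best similarity to the surviving tracks
        best = max(pool, key=lambda c: max(similarity(c, k) for k in kept))
        kept.append(best)
        pool.remove(best)
    return kept
```

In the talk, similarity comes from audio feature analysis; here it is left as a pluggable function so the replacement logic stands on its own.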
Next, Björn Schuller from Technische Universität München is presenting on "One Day in Half an Hour: Music Thumbnailing Incorporating Harmony- and Rhythm Structure". Music thumbnailing is a really cool feature. Just imagine: you're sitting in your car looking for another track to play, but your player always starts songs at the beginning, and they have long and boring intros. Getting to the most interesting (or significant) part of the song immediately would really be something...
The sessions close with an invited talk given by Stefan Weinzierl and Sascha Spors on "The Future of Audio Reproduction. Technology - Formats - Applications". Promising title, let's see... We start with a brief history of audio recording and reproduction technology, from the very first phonograph to modern multichannel spatial surround sound systems. The future seems to be real sound field synthesis (wave field synthesis, WFS) instead of relying on psycho-acoustic effects as in today's stereo. Here, an array of loudspeakers reproduces exactly the wave front of the original sound source. For transmitting such signals, no individual channels are recorded anymore, but the original sound signal itself, including movement and position of the sound source (without the spatial characteristics of the room where it was recorded, because those would interfere with the characteristics of the room where it is reproduced). Besides the existing VRML and MPEG-4 Audio BIFS, which focus more on visual scene description than on audio scene description, there is a proposal for a new modeling language for high-resolution spatial sound events called ASDF (Audio Scene Description Format).
[...to be continued in Adaptive Multimedia Retrieval 2008 in Berlin, June 26-27, 2008 - Day 02]
Wednesday, June 25, 2008
Visualization of large document data sets
Of course, there are a lot of books at Amazon. To find a specific book, you have several possibilities. Either you try the search box (be careful to write your keywords exactly the way they appear, e.g. in the title), or you follow the categories and the proposed selections of books there (Top 10 lists, etc.). A great feature, of course, is the similarity-based search, or search based on recommendations. This is the only way to discover something new by serendipity, something you didn't even know existed. In real life, this way of finding a book is the equivalent of a recommendation from friends or from your local librarian or bookseller.
But what about good old window-shopping? If I come to a bookstore or a library, I love to wander around the book shelves, to look here and there, and to spend (sometimes to waste...) lots of time.
Something that comes visually close to this experience is zoomii.com. It's an Amazon add-on for browsing a considerably large set of books (they say it's about 20,000 books right now) with the look and feel of book shelves, ordered by categories, with (right-sized) book covers in the shelves. You can walk around, zoom in and out, and of course you can browse by category or search via keyword.
It's really a nice way to visualize large sets of documents. I would like to see an API for visualizing documents on the web that way!
[via netbib.log]
Thursday, June 19, 2008
Smithsonian's Photographic Archive at flickr!
The Smithsonian's photographic archive (at least parts of it) is available at flickr! The pictures are in high resolution and published under the Flickr Commons copyright regulations, i.e. free of known copyright restrictions.
Most interesting is the folder 'Portraits of Scientists and Inventors', including many famous scientists from the 19th century (including G. Marconi in the upper left corner...). The pictures are from the Smithsonian's Dibner Library of the History of Science and Technology, which has a collection of more than a thousand portraits of scientists and inventors through the centuries. Only a small sampling of 144 pictures from the collection is available at flickr, giving you an idea of its range. Visit "Scientific Identity: Portraits from the Dibner Library" to see the entire collection.
Other interesting folders include 'Portraits of Artists' and 'People and the Post' from the Smithsonian's National Postal Museum.
[via boing boing]
Tuesday, June 17, 2008
to boldly go....
For sure, there are a lot of nice little visualisation tools for flickr, and even more has been written about tagging, folksonomies, and web 2.0 search (including the 'flickr!' article in this blog...). Nevertheless, I found a new flash-based application for flickr tag/search visualisation that is worth taking a look at: Tag Galaxy.
Tag Galaxy visualises the search process in a bold Star Trek manner. Your search keyword (tag) is shown as a central star/planet with related tags spinning around it. You can refine your search by clicking on one of the small tag planets. This is only a one-step-ahead search... why not show related tags of related tags... with their interrelationships...? Sounds weird? Should be worth a shot. Clicking on the central star/planet takes you to another view, where the search results (images) are shown on an animated globe. Really neat... and interesting what comes out when people have enough time to play around with flash ;-)
Anyway, the trouble with neat search visualisations is often that they are nice to look at but get boring after a few tries. A really good visualisation gives you more information than you would have without it. Only if it's worthwhile will you keep on using it. Just think of Google Maps: relating your search to geographical information improves your search results... and thus you will also use it next time...
Tuesday, June 10, 2008
SPAM - Cluttered Mailboxes and the Märkische Allgemeine
No sooner had I held forth in my last article about the media echo here at HPI than yet another article appeared in the Märkische Allgemeine Zeitung in which I am quoted. In "INFORMATIONSTECHNIK: Vermüllte Briefkästen - Die Flut sogenannter Spam-Mails kostet Zeit, Geld und Energie" ("Information technology: cluttered mailboxes - the flood of so-called spam mails costs time, money, and energy") by Ulrich Nettelstroth, I am quoted with the call to introduce "postage for e-mails" in order to get a grip on the daily flood of spam. Actually, I had expected an outcry to sweep the nation and to be branded a capitalist henchman by the globalization activists, since this would mean "betraying" the basic principles of the eco-social, anarchic internet. But nobody noticed ;-)
What's behind it? No, the point is not to squeeze even more money out of consumers. Just imagine that the monthly cost of our internet access included 10 cents as a postage equivalent for a free quota of 3,000 e-mails. 99.99% of all "normal" users certainly fall into this category today (that corresponds to 100 e-mails sent per day, including Sundays, holidays, vacation, etc.). Nobody would notice, and nobody would be kept away from the internet by this small fee, since the usual connection and provisioning costs of any kind of data communication are many times this amount.
Typical spammers, however, usually send their buying appeals to 10 million and more "customers" per mailing campaign. Extrapolated, that comes to about 300 euros for 10 million e-mails sent. I think that would already reach the profitability limit of the spam senders. If not, the amount would have to be "tuned" upwards a bit. Today, since sending spam e-mails is nearly free, a campaign already pays off for the spammer if it brings in 100 euros... Does it? I don't know whether there are reliable studies on this (if anyone knows a source, I would be grateful for pointers!).
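The arithmetic behind these figures is easy to check: 10 cents for a quota of 3,000 mails is a third of a hundredth of a cent per mail, so 10 million mails come to roughly 333 euros. As a two-line calculation (the 10-cent/3,000-mail quota is of course the hypothetical tariff proposed above, not a real price):

```python
# Hypothetical e-mail postage from the proposal above:
# 10 cents buy a monthly quota of 3,000 mails.
FREE_QUOTA_PRICE_EUR = 0.10
FREE_QUOTA_MAILS = 3000

price_per_mail = FREE_QUOTA_PRICE_EUR / FREE_QUOTA_MAILS
spam_run = 10_000_000  # mails in a typical spam campaign
campaign_cost = spam_run * price_per_mail

print(f"{price_per_mail:.7f} EUR per mail")      # 0.0000333 EUR
print(f"{campaign_cost:.0f} EUR per campaign")   # 333 EUR, i.e. roughly 300
```

So the quoted "about 300 euros" per 10-million-mail campaign matches the proposed quota price.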
However, postage for e-mails would also require a correspondingly tamper-proof accounting scheme, i.e. more effort and more resources would be necessary. On the other hand, if spam traffic decreased, the current load on the data communication infrastructure would drop as well... and with it the associated energy consumption. Thus, "postage for e-mails" could even improve the global CO2 balance ;-)
Friday, June 06, 2008
In the media....
The media echo I have been getting since I arrived here at the Hasso Plattner Institute in Potsdam is quite different from that of my Jena days. Such is also this article I found about the new edition, currently in the works, of our book "WWW - Kommunikation, Internetworking, Web-Technologien". The Märkische Allgemeine Zeitung already announced it in an article "Vom Homo sapiens zum Homo surfiens" ("From Homo sapiens to Homo surfiens") on May 29, 2008.
So, to keep the fan community up to date: work is progressing. New chapters, e.g. on search engines and Web 2.0, are already written, all chapters are being carefully revised and brought up to date (I am currently wrestling with the variants of video compression...), and further new chapters, e.g. on the Semantic Web and SOA, will follow. I cannot yet say anything about the planned size of the new edition, but I fear we will reach the 1,500-page mark, and so it may yet become a two-volume work...
Ch. Meinel, H. Sack: WWW: Kommunikation, Internetworking, Web-Technologien, Springer, Heidelberg, 2004.
Wednesday, June 04, 2008
Semantic Web Podcast Interview
Yesterday I was interviewed on the topic of the Semantic Web by my good old acquaintance and colleague Steffen Büffel at the 3rd Dresdner Future Forum. Admittedly, I was a little unprepared, and interviews conducted via Skype always sound a bit wooden (or rather tinny...), but anyway, here it is, my first podcast interview... and of course many thanks to Steffen.
[Link to the original article at media-ocean]