Saturday, July 14, 2012

This blog has moved


An archive of this blog, which you can still comment on, is now located at http://blog1.vorburger.ch. You will be automatically redirected in 30 seconds or you may click here.

My New Blog (as of July 2012) is http://blog2.vorburger.ch.

Sunday, March 07, 2010

Pranav Mistry's SixthSense - on an Android-based wearable projector+webcam platform?

A visiting family friend of ours pointed me to Pranav Mistry's SixthSense. If you haven't seen this already, do watch the Videos - it's really Pretty Cool Stuff.

A few days later I still can't stop thinking about this. I've started pondering how far out in the future this amazing concept may be (in a commercialized, consumer-available variant) - or how close, actually. What you need to make this happen is probably: 1. a "reasonably powerful" portable computing platform, 2. a wearable projector, 3. a wearable capturing thing (something which senses where your finger / foot / whatever points to or moves around on the projected image), 4. software with a touch interface, and more.

The 1. portable computing platform could be e.g. an Android-powered device. It occurred to me to Google for "android projector", and doing that you'll discover e.g. the upcoming Samsung Beam, a small Android-based device from Samsung with a built-in projector/beamer (using a "DLP pico projector", whatever that is). I doubt it can capture what it projects, though.

Pranav's "current prototype system" which "costs approximate $350 to build" seems to be using a standard off-the-shelve webcam (and some "magic" software, or is this easier to do than I realize?). Hook that up to an Android... isn't quite possible yet, from what I found in
Android's Issue 738 about "missing support for the USB host feature", but that's surely just a question of a little bit more time? (Or directly using an Android's device built-in camera? But wouldn't the projector and the cam have to be calibrated together so the capturing works?)

And then some smart software, certainly. A touch-based platform seems a good foundation?

So this could actually be made possible in a closer future than some may think.

Wouldn't this be... "neat", or what? I would love to be able to look more into this...

Thursday, November 26, 2009

JPA Id/Object References always, with Null Objects, a Pattern?

An object has references to other objects. A row in a relational database table has foreign keys (FK) to rows in other tables.

While this basic "impedance mismatch" between the model used in the relational datastores prevalent in the enterprise today (RDBMS) and an object model (OO) is well understood and addressed by today's Object Relational Mapping (ORM) technologies, there is a specific use case which (AFAIK) is typically not easily addressed by how today's ORMs are used: What if, sometimes, you need that FK directly?

Imagine, in pseudo code (you get the idea): class A { long id; B b; ... } and class B { long id; ... }. In certain situations it would be handy, and feel natural from the OO point of view, if you could just ask for a.b.id. Alas, this typically leads to "lazy loading", a delayed access with a SELECT FROM b ... in an ORM. If you really just want the b.id, this deferred RDBMS access of course makes no sense.
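
To make that a little more concrete, here is a minimal sketch (illustrative only; these are not the example project's actual classes) of what such entities typically look like in JPA, and where the unwanted SELECT sneaks in:

import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.ManyToOne;

// A.java and B.java (shown together here for brevity):

@Entity
public class B {
    @Id
    private Long id;

    public Long getId() { return id; }
}

@Entity
public class A {
    @Id
    private Long id;

    // LAZY is only a hint to the provider: b is normally not loaded together with A
    @ManyToOne(fetch = FetchType.LAZY)
    private B b;

    public B getB() { return b; }
}

// somewhere in client code:
// A a = em.find(A.class, someId);
// Long bId = a.getB().getId(); // with a lazy proxy this may trigger a SELECT FROM B,
//                              // or b may simply be null if the field wasn't fetched at all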

The "problem" why normally this doesn't work when using e.g. a Java Persistence API (JPA) implementation ORM is that relation fields in entity instances returned by a query (or an EM's "get one" find() method) are normally either fully initialized because the field was actually annotated as FetchType.EAGER, or because a JOIN FETCH in a JPQL Query (standardized by the specification, not implementation specific) requested it to be, or due to some JPA implementation specific API such as the OpenJPA FetchPlan API which asked for that (all of which leads to table JOINs and/or additional SELECT queries), or null if none of that is used.

The JPA specification must have had use cases like these in mind, and offers a concept of interest in this context, namely the probably less well known T getReference(Class<T> entityClass, Object primaryKey) method of an EntityManager.
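
For illustration, a hedged little sketch of how that differs from a plain find() (exact proxy behaviour is provider-dependent, but the idea holds):

import javax.persistence.EntityManager;

class GetReferenceSketch {
    Long idOfBWithoutLoadingIt(EntityManager em) {
        // em.find(B.class, 42L) would load the full entity (from the cache or via a SELECT);
        // getReference() instead returns a "hollow" placeholder that carries the primary key,
        // so no SELECT should be needed until some non-id state is actually touched:
        B hollow = em.getReference(B.class, 42L);
        return hollow.getId(); // typically answered from the reference itself, no DB round trip
    }
}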

Furthermore, it turns out that e.g. B's id is in fact typically already available to the ORM internally after it read an A, even if the A's b field is still null. This makes sense, and is how the lazy loading stuff normally works behind the scenes in all ORMs AFAIK (that's how they do one and not two SELECT statements when lazy loading).

It occurred to me that it would be really handy if all relationship fields of an Entity were always initialized, either with a full blown real object if the field really was eagerly fetched, or with whatever kind of "hollow" object getReference() created efficiently (typically only the object's id fields composing the oid). This would make the a.b.id example from above work for both the "eager" and the "lazy" scenario homogeneously, efficiently & very naturally!

With a little bit of unfortunately unavoidable hacking to access one JPA implementation's internals (OpenJPA in my example) - needed because the JPA public API (both v1 and v2) gives access neither to that internal A's B id and the loaded state, nor to a getReference() from within a @PostLoad without an EntityManager - this idea does indeed work, as demonstrated in my example project's test case. (Other JPA implementations would likely allow similar direct access to their data structures? The only thing in the example that would need to be "ported" to another ORM such as e.g. EclipseLink or Hibernate is factored into the JPAHelper class... if anybody is interested in trying this out?)

One interesting side effect I ran into while looking at this and trying to get a running sample was the case of e.g. A's b really having to be null - because the, say, b_id FK in the A table IS actually NULL in the DB (if it's optional / nullable). I thought it would be good if you could STILL do a.b.id (always), with that expression (access path) simply returning null (or 0 if the id field is of a primitive int or long type instead of an Integer or Long object) in that case, but never causing a NullPointerException.

The initial inspiration for that was how the new Scala programming language appears to (normally? from what I understood so far; I'm only halfway through my Scala book!) prevent NullPointerExceptions altogether. Then a good colleague pointed out that conceptually this is of course nothing new, and not specific to Scala - it's the Null Object pattern at work. So I threw in a bit of Null JPA Entity objects, and the interplay of my Reference object idea above with the application of the Null Object pattern here seems really neat.
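
By way of illustration only (again, this is not the example project's actual code, which creates these dynamically at runtime via the NullEntityFactory mentioned below), the Null Object side of the idea boils down to something like this:

// A "null" B, handed out instead of null when A.b_id IS NULL in the database,
// so that a.getB().getId() never throws a NullPointerException:
class NullB extends B {
    static final NullB INSTANCE = new NullB();

    @Override
    public Long getId() {
        return null; // "no B" simply means "no id" - never an NPE
    }
}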

Download the "JPA Id/Object References always, with Null Objects" example project to have a closer look at running code demonstrating this idea. - Do you like this approach? Is this a "pattern"? Could & should future JPA specification standardize support for such a usage?

Acknowledgments: Thanks to Yann Andenmatten for always inspiring feedback & discussions, and the dynamic/runtime AOP-ish NullEntityFactory contribution to the example project.

Wednesday, October 21, 2009

Customer Loyalty (KISSfp)

Recently I received one of the more fascinating emails in quite a while. Quite an experience, really. With Bill's permission, here it is:
Hi, Mike,

Sorry to bother you at this email, but I keep getting rejection notices from sending this to your KISSfp email address.

I have a dying PC with KISSfp installed on it and I need to reinstall it on another computer.

Unfortunately, the only copy of an install program I have is the trial version (2MB). Can I download the live version somewhere? The link from my original order confirmation is no longer valid.

Thanks!
Bill

Here's a copy of my order confirmation:

> Subject: Order #1234567 Purchase Information
> Date: 23 Sep 2000
>
> Your order has been approved - thank you for your purchase!

> Order SUMMARY
> =============
> Order #:          1234567
> Purchase Date:    9/23/00
> Purchase Amount:  USD 49.95
> Last Name:        XXX
> Nighttime Phone:  123.123.1234
>
>   (1)   KISSfp Microsoft FrontPage FTP Add-On - Business Edition for Commercial Webs
Now, if you don't understand why "fascinating experience", just check out the dates... Bill purchased a KISSfp license in 2000, and is contacting me today (October 2009) asking for support - 9 years on! Quite a lesson in Customer Loyalty, isn't it? Nine years is an eternity... since I created my KISSfp "Microsoft FrontPage FTP Add-On" tool. So many things have happened since: I have moved from Switzerland to Italy to Switzerland to California/USA to Switzerland... got married to an amazing wife... had two wonderful kids... wrote a book... not to speak of four great day jobs... and much more, of course. Man. Man!

As I just got quite nostalgic, I dug into my "rummaging old, moldy files" (Bill in our email exchange following his request), and here is the full disclosure history of KISSfp: It all started in April 1998 (the pure C/CLI version), with some work done during my exchange year in Torino (thank you, Massimo, and Daniele!). Around Summer/September 1998, I appear to have started with the Borland C++ Builder-based GUI, coding away at least partly during the nights while interning during the day at the IPB in Geneva (thank you Jo & Adam for proof-reading the help!), with what eventually became the 2.0 (initially "professionally" packaged & released in February 1999; see also KISSfp Version History). Further developments during the summer of 1999, and if I recall right from then on more time spent on stuff like online marketing, the whole Digital River / NetSales story, my referral program, the MenuPlus partnership with Jeff (where are you, Jeff?), the inclusion in the Microsoft Office Update Vendor Program in April/May 1999 (yeah!), at least one FrontPage book ("Master Visually FrontPage 2002") but probably others with CDs including KISSfp, that PreviewSystems VBox thing (now apparently Aladdin.com HASP; how ridiculous all that seems in retrospect from today's day and age where I live daily using open source Java enterprise components), with increasingly fairly intensive daily technical support (thank you, Divvya!), well into 2000... and ultimately priorities in life starting to shift significantly about 2001 I think, with rapidly less time available for KISSfp.

For Bill and anybody else who may still be out there, I have finally made available a free/non-VBoxed/unlocked version of KISSfp publicly ... To use it, first run the KISSFP21.EXE which is the classic setup installer, and then run kissfp20-pvtkey.exe or kissfp20-buskey.exe once and it will be "registered" - you'll have the same app that you could purchase in those golden days, and it seems to still run just fine on XP today! - PS: The installer EXE doesn't seem to quit and hangs around (I think it always was like that), so you should maybe log out and log back in to be clean.

Hey, ya all out there, loyal KISSfp users (if there are any others than Bill left?!), starting with the enthusiastic early ones and the at least many hundreds if not (probably) few thousands total along the road - it was a great time! Thank you.

KISSfp, forever!

Take care.

Tuesday, October 20, 2009

Privacy, Firefox Geolocation, Google Location Service

I vaguely knew there was something like a W3C Geolocation API, had earlier read about the Google Gears Geolocation API, figured Yahoo had some Geo Technologies.

I had heard this works on recent mobile devices with built-in GPS, or by exploiting cellphone antenna/network triangulation, with the built-in browsers hooking into the mobile OS and exposing this information, but had always assumed that at home on a laptop with a classic browser this wouldn't apply - how could it?

Then I stumbled upon the Geolocation support built-into Firefox 3.5, and unsuspectingly clicked "Give it a try!" on the Mozilla.com test page, and... WTF, HOW DO THEY KNOW MY EXACT ADDRESS?? I live on a small street, and just looking at the map there is no doubt that "they" know the exact street - not just the area. (They being Google here, as "Firefox gathers information about nearby wireless access points and your computer’s IP address. Then Firefox sends this information to the default geolocation service provider, Google Location Services, to get an estimate of your location.")

Now, IP-based Geolocation is old news - you could figure out "geolocation" years ago by looking at the DNS names of the router hops shown by a traceroute - but unless my ISP in Switzerland shares details about their network topology with Google, how did this now get to street-level granularity?!

I know more recently there is this WiFi and cellphone tower triangulation stuff, but unless I'm totally not getting it, Firefox could only know my home WiFi SSID, so what? Or I guess maybe it can ask the OS for the names of all access points currently being picked up, but still, it's a residential area, they're just neighbors, "they" couldn't have geolocation data on all of them?! And even so, WiFi SSIDs aren't exactly GUIDs...

Now generally speaking I am not a privacy maniac (e.g. I didn't quite "get" the surprising reactions in Switzerland when Google Street View came online recently; that's all already in public anyways!), but here I got... I don't know. Yeah yeah, Firefox respects my privacy and there is this toolbar thingie asking every time if I really do want to share my location... but isn't it still a bit... you know, scary?

PS: Curiously, the German version of that same Mozilla page thinks I'm in "Zurich" (I'm actually about 200km away from it!), and once I visited that, even the English page forgot what it first knew, and also said Zurich. But a browser restart and visiting the English page again brought back the 007 insight I first noticed. For a moment I suspected that maybe Google is simply exploiting my account cookie (which wouldn't be very "location" aware at all), but a test where I logged out of Google and then went to the Mozilla page showed that it probably has nothing to do with that.

PPS: After having already posted the above, a test/idea occurred to me: I completely switched off the WiFi on the laptop that I'm trying/writing all this from, got a good old ethernet cable out of the drawer and plugged that into the ADSL router at home. Interestingly, it thought I was in Zurich again! (I noticed it's best to restart the browser for such tests, but then it's definitely repeatable.) So apparently this IS based on WiFi names then (really just SSID names, or do "they" have any other more GUID-like info available??), not simple IP-based location tracking. So "somebody" presumably drove by here, detected/measured and mapped out my and our neighbours' access points, and recorded all this in one fr%*#ing global DB?? This is crazy!

Monday, October 12, 2009

Links for kids, learning websites & software

Here is a list of some links to learning websites & software for kids which others may find useful, from my delicious.com bookmarks. Most of them I have used myself with my son Dév:
And the following are some great (free) learning software to download, not websites:
Lastly, here are some things that look interesting but that we haven't looked at much yet, and which are probably of interest for slightly older kids than the above:

Monday, August 24, 2009

xText Standalone Setup Parsing a DSL from a String without EMF Resource

I have been playing with xText, a very interesting framework for development of textual domain specific languages (DSLs).

My interest is around an (existing, legacy) expression language, and twofold: a) build a strong Eclipse editor, b) build an interpreter for a non-Eclipse application runtime. XText is really great for the first. Here is how to use the xText generated parser in a "standalone" non-Eclipse application, based on their generated Google Guice setup infrastructure, without having to go through the EMF ECore Resource XtextResource:

String t = "...";
Injector guiceInjector = new MyDSLStandaloneSetup().createInjectorAndDoEMFRegistration();
IParser parser = guiceInjector.getInstance(IAntlrParser.class);
IParseResult result = parser.parse(new StringInputStream(t));
List<SyntaxError> errors = result.getParseErrors();
Assert.assertTrue(errors.size() == 0);
EObject eRoot = result.getRootASTElement();
MyDSLRoot root = (MyDSLRoot) eRoot;

Filed a minor item on https://bugs.eclipse.org/bugs/show_bug.cgi?id=287413 to maybe have them include something like this in the official xText documentation.

PS: I am not entirely sure that StringInputStream properly always converts all (Unicode) characters into bytes for an InputStream? (But that's another topic... the whole StringBufferInputStream deprecated story, the StringReader alternative which I guess would require that xText can feed the Antlr Lexer a Reader instead of an InputStream... could it? That org.eclipse.xtext.util.StringInputStream is just a String.getBytes(), which "uses the platform's default encoding" - is that safe?)

Thursday, May 14, 2009

GetterMethodsReflectionToStringBuilder

Apache Commons Lang has a ToStringBuilder which is sometimes handy.

The other day I needed something like this which "reflects" on public getters() instead of on private fields, so I hacked one; I call it the GetterMethodsReflectionToStringBuilder.
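
The real thing is attached to the JIRA issue below; purely to illustrate the idea (this is a stripped-down sketch, not the code I uploaded, and without ToStringBuilder's bells and whistles), "reflecting on getters" boils down to something like this:

import java.lang.reflect.Method;

// Builds e.g. "MyBean[getName=..,isActive=..]" from the public no-arg getXyz()/isXyz() methods.
class GetterToStringSketch {
    static String toString(Object o) {
        StringBuilder sb = new StringBuilder(o.getClass().getSimpleName()).append('[');
        boolean first = true;
        for (Method m : o.getClass().getMethods()) {
            String name = m.getName();
            boolean isGetter = (name.startsWith("get") || name.startsWith("is"))
                    && m.getParameterTypes().length == 0
                    && !name.equals("getClass");
            if (!isGetter) {
                continue;
            }
            try {
                if (!first) {
                    sb.append(',');
                }
                sb.append(name).append('=').append(m.invoke(o));
                first = false;
            } catch (Exception e) {
                // skip getters that throw; a real implementation would handle this more carefully
            }
        }
        return sb.append(']').toString();
    }
}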

Uploaded to https://issues.apache.org/jira/browse/LANG-503 as suggestion for Commons Lang 3.0.

Thursday, August 07, 2008

Testing OpenJPA SQL statements using a JDBCListener

I am currently designing something around FetchGroups in OpenJPA, and being a good Agile / Test Driven architect, I wanted to write a JUnit test which asserts the actual SQL statement generated by this ORM (the purpose is to test the FROM/WHERE: the SQL should not include unnecessary JOINs and columns if the framework I'm building around FetchGroups works).

Initially I went off looking into e.g. good ol' P6spy (that's STILL around?). Then I mused about some (Spring?) AOP Interceptor for a JDBC Connection, and this article on DW seemed a good starting point... But finally I realized that the following (OpenJPA specific; this is obviously not standard JPA) does the job perfectly (and may be useful to others, so it's posted here):
public class RememberingLastStatementJDBCListener extends AbstractJDBCListener {

    private String lastSQL;

    @Override
    protected void eventOccurred(JDBCEvent event) {
        if (event.getSQL() != null) {
            // Note: This will be called several times, for different event.getType();
            // but it doesn't matter for this use.
            this.lastSQL = event.getSQL();
        }
    }

    public String getLastSQL() {
        return this.lastSQL;
    }
}

and then in the JUnit you can do something like this:

RememberingLastStatementJDBCListener rememberingLastStatementJDBCListener;

// Register the listener before the EntityManagerFactory is created:
openJPAConnectionProperties.setProperty("openjpa.jdbc.JDBCListeners",
        RememberingLastStatementJDBCListener.class.getName());

// Dig the listener instance back out of the OpenJPA configuration:
OpenJPAEntityManager oem = OpenJPAPersistence.cast(em);
OpenJPAEntityManagerSPI oems = (OpenJPAEntityManagerSPI) oem;
OpenJPAConfiguration configuration = oems.getConfiguration();
JDBCConfiguration jdbcConfiguration = (JDBCConfiguration) configuration;
JDBCListener[] jdbcListeners = jdbcConfiguration.getJDBCListenerInstances();
for (JDBCListener listener : jdbcListeners) {
    if (listener instanceof RememberingLastStatementJDBCListener) {
        rememberingLastStatementJDBCListener = (RememberingLastStatementJDBCListener) listener;
    }
}

SomeEntity e = new SomeEntity();
e.setName("Hallo World!");
em.persist(e);

// We *HAVE* to flush() at this point, otherwise the Listener won't (yet) have seen the Statement(s)
em.flush();

Assert.assertEquals("INSERT INTO APP.SOMEENTITY (ID, NAME) VALUES (?, ?)",
        rememberingLastStatementJDBCListener.getLastSQL());

Don't you love Open Source frameworks, where you can dig into the code and figure this kind of stuff out? ;-)

Wednesday, February 06, 2008

liftConference Venture Night

I am at the Venture Night of the lift conference in Geneva this evening. It's an interesting change from the sort of things I think about in my current day job. Some thoughts about the companies that are presenting their startups on stage while I write this:
  • cocomment, UK: Bof. Some negative comments from audience and panel. Claims to be "leading web commenting platform". Can stuff that requires an IE/FF plug-in really make it truly big? Doesn't it overlap too much with existing blogging platforms?
  • Holistis: Track-your-sites-visitors. Haven't we seen this before? But, uhm - I came late, didn't look closely enough. Maybe interesting... I probably missed where the innovation lies.
  • iO naturalinteraction.org, Italy: Something else, not yet another Web 2.0. Very cool video of like a kiosk for a shopping mall where you can look at e.g. product catalog like in ... that SciFi movie, with the police that can see the future? Where they move stuff with fingers on a table. Looks neat. Wasn't Microsoft showing off stuff very much like this about a year or so ago? It somehow looks very familiar. Maybe it's actually the same.
  • Mixin, Switzerland: Calendaring for Web2.0... match up free slots with your friends for fixing up time for a movie or something. People mention doppler and twitter. I wouldn't use it - but I don't use (nor at a personal level really "get") e.g. Facebook either... Still: Thumbs up!
  • Pixelux, Geneva: Innovative gaming graphics engine, something about new algorithms for simulating material physics or something. More f*# stupid war games. Oh well. Will be used in next StarWars game. Made it!
  • Viewdle: Face recognition for Videos! Can I get this for Flickr and as an iPhoto plug-in, to tag family photos with who is on it? (I know it exists, saw some other Flickr-like online photo sharing thingie with face recognition stuff a while ago.) Kewl.
  • Wuala, Zurich: Interesting! Online disk stuff, with a P2P twist. From Zurich, out of ETHZ guys. Maybe I just like it because it actually responds to a need I currently have - I really need to start backing up my files somewhere online. Idea: You share local disk space for others against getting the same space from their P2P network. You seem to know what you're doing, I liked the speed of the guy's presentation. Classical business model - advertising and paying power users. - "We want to become the Skype of online storage". Well, good luck, guys!
  • Clipperz, Italy: Nah. Mixes basically interesting underlying "encryption for Web2.0" technology (something like this should be available in lower-level toolkits?) with a not-so-exciting "password manager for all your Web accounts" - tried before, too niche.

Saturday, June 23, 2007

Google Maps Street View

I had heard about the new "Street View" in Google Maps earlier but just tried it for real tonight...

To "walk" from my previous life office on 11 John Street over to Ground Zero and Battery Park, then zoom over to the west coast and drive around Sunnyvale (less pics) and Stanford's Palm drive -- it's freak' cool! Try it.

Now let's mash this up with SecondLife somehow... (BTW, off-topic but still: OpenCroquet and maybe more so lg3d-wonderland are kind of interesting; it's not about SL, it's the big picture behind of what is emerging with these...)

PS, added after initial post of the above: Second Earth...

Saturday, December 30, 2006

JUnit in Ant Classpath Hell

Running JUnit tests from an Ant build script - piece of cake? Welcome to Classpath Hell. Read the Ant FAQ for a taste of what to expect... (NOTE: Ant 1.7, which just came out, finally "solves" this, but most projects are on Ant v1.6.5, not 1.7, yet.)

Essentially, you have to either add the junit.jar to your global CLASSPATH system environment variable (very bad) or copy junit.jar to your %ANT_HOME%/lib or remove ant-junit.jar from %ANT_HOME%/lib (and have both ant-junit.jar and junit.jar in your project and then use an ant taskdef with classpath) - bad too.

The best I could think of to make this less painful is at least have a build script semi-transparently take care of this, by itself automatically copying a junit.jar to your %HOME%/.ant/lib (slightly better than %ANT_HOME%/lib probably) if it is not there yet... not perfect, as the build of a project thus "pollutes" a globally installed application - but apparently the best you can do?

Such an ant build script would look like this:

<target name="copyJUnitToAntClasspathIfNeeded" unless="isJUnitInAntClasspath">
<fail unless="ant.library.dir" message="The ant variable ant.library.dir is not available... that's weired. Please manually copy lib/junit.jar into the lib directory of where you installed ant">

<copy file="lib/junit-3.8.1.jar" todir="${user.home}/.ant/lib" verbose="true" /> <!-- Or ${ant.library.dir}, but user.home is probably better? -->

<echo message="JUnit.jar had to be copied into the lib directory of your ant" />
<echo message="installation. Please manually restart the build now." />
<echo message="(There is unfortunately no cleaner way to do this prior to Ant 1.7; see: " />
<echo message="http://ant.apache.org/faq.html#delegating-classloader)" />
<fail message="Please just launch this build once again, this is a one-time only behaviour." />
</target>

<target name="testIfJUnitInAntClasspath">
<available property="isJUnitInAntClasspath" classname="junit.framework.Test" />
<antcall target="copyJUnitToAntClasspathIfNeeded"/>
</target>

<target name="test" depends="testIfJUnitInAntClasspath">
<junit ... />


Tuesday, October 31, 2006

Propagating Acegi's Security Context in a WSS UsernameToken SOAP Header via XFire using wss4j

XFire includes a ws-security example, which demonstrates how a SOAP Header with a Web Service Security (WSS) UsernameToken can be inserted into an outgoing request message, using Apache's wss4j library. (The XFire ws-security example also demonstrates how to sign and/or encrypt a SOAP message, and how to use the WSS timestamp mechanism.)

I needed this functionality to be available more easily for Java service consumers, in the sense of implicit security context passing that is transparent to the programmer when invoking a service. As the Java API to set up a security context, Acegi from the Spring Framework should be used.

Maybe first some brief background on WSS: A WSS UsernameToken is really just a username and password, in principle similar to HTTP BASIC authentication. However HTTP BASIC delivers the credential at the transport level, whereas a WSS Header has the advantage of propagating a credential at the (SOAP) message level, thus allowing it to travel forward through several intermediaries. For example, a message could move from a service consumer to a XML security gateway, to an ESB and then into a message queue, out of which it would go into another ESB, which would route it to a service provider. With transport-level instead of message level propagation each intermediary would have to ensure forwarding, or out-of-message storage (queue) of the credential.

The WSS standard standardizes the SOAP headers used for such message level credentials; in addition to a UsernameToken, it could also be a X.509 certificate or a Kerberos ticket or a SAML token.

As a UsernameToken contains a cleartext password, the message would typically be protected either through transport level point-to-point encryption via SSL, just as would be advisable when using HTTP BASIC. Additionally, the SOAP message could also be protected through message level XML encryption.

So, back to Java/Acegi/XFire: What I really wanted was to be able to do the following at the service consumer (client):

import org.acegisecurity.context.SecurityContextHolder;
import org.acegisecurity.providers.UsernamePasswordAuthenticationToken;

SecurityContextHolder.getContext().setAuthentication(
new UsernamePasswordAuthenticationToken("uid", "pwd"));

myServiceStub.myOperation(myRequest);

SecurityContextHolder.clearContext();

and have the uid/pwd end up "transparently" in the outgoing message produced by XFire. The actual service provider (server) might or might not receive that header, e.g. if an intermediary (actual SOAP intermediary or just an in-process incoming message filter) authenticated and authorized the message, potentially stripping the WSS header. If however it did receive the SOAP header, it might have a need to get the credential back, again easily and using the same Acegi API, without having to deal with SOAP headers etc. directly, by doing:

Authentication auth = SecurityContextHolder.getContext().getAuthentication();
assert auth != null; // Is null if no WSS Header present in SOAP message!
String uid = auth.getName();
String pwd = (String) auth.getCredentials(); // getCredentials() returns Object; here it is the password String

The acegi-ws-security-xfire-example contains a working example implementing just that. (I built it by extending the XFire jaxws-spring example I had contributed earlier to the project; but the Acegi/WSS integration itself is unrelated to that, and the Handler etc. infrastructure should be easily reusable in other uses of XFire, e.g. POJO without WSDL.)

Sunday, October 15, 2006

SWSDL - Simple WSDL format, and my swsdl2wsdl

Hand writing WSDL (Web Service Description Language, stop reading here if that doesn't mean much to you?) is a pain. The GUIs I have seen (whether XML Spy or Eclipse WTP or...) don't really change the fact that typing (or clicking, same thing) type and message and then a portType and then binding again and then finally a service... jeez!

Let's not go into the Contract First (writing WSDL and XSD) versus Code First (e.g. Java first and then generating WSDL and XSD from it) debate here. Let's assume you think doing SOA with Contract First makes sense, describing a service interface in XML Schema is sensible, but what bothers you too is having to write that WSDL, because you are lazy, and like me you think that might be a Good Thing.

Now, of course, there is WSDL 2.0 which promises some (!) simplification on the horizon, or maybe not (on the horizon even, or simplification)... whatever, it's not here today. Other ideas and tools which float around include e.g. WSCF (and jWSCF), but all I really wanted is a simple, platform/language-neutral form to describe a "service", referencing an XSD file for the Schema - like how hard is that? If "extensibility" is really needed, that "form" could allow extensions, but the normal, most common use case should be simple.

So I designed SWSDL - Simple WSDL, a swsdl.xsd and a little converter (swsdl2wsdl) that generates WSDL 1.1 from my SWSDL. Here is a page where I compare hand-written SWSDL (Simple-WSDL) and the standard WSDL auto-generated by swsdl2wsdl. Interested? Download swsdl-distribution-0.5.zip and give it a shot!

If you find this useful, let me know (post a comment on the blog) what you think. I might put this thingie on Sourceforge or Java.net or somewhere. For now, the src is in the swsdl-project-0.5.zip, but do try the swsdl-distribution-0.5.zip first as it has an example and launch script.

Tuesday, October 10, 2006

soapUI

If you develop SOAP-based Web Services, there is a tool that you will not want to miss once you have tried it. It's soapUI, an open-source GUI for easily and very quickly creating and firing off SOAP requests. (So basically, you feed it WSDL, it creates sample SOAP requests for you, you edit those in a simple but functional XML editor, and Send it to see the response.)

This is one of the advantages of an SOA using XML as the platform-neutral lingua franca of message exchange for integration purposes: a machine readable representation (XML Schema and WSDL) to describe services and make them discoverable and easily invokable by tools, such as soapUI. And no JAR files like for RMI services, no IDL compilers... it's a data format, there is a commonly agreed description of it, and tools can construct and consume messages - easily, and even "simple" tools.

Even technical business analysts can perform functional testing of services, as suggested, and as successfully happened, in one project I was coaching. (SOAPui also has performance and stress testing features; I haven't actually used those yet, but recently recommended it to somebody who was looking for a tool to model such test scenarios on a few web services.)

Of course, some commercial tools offer similar features (e.g. XMLSpy, Mindreef SOAPscope; probably others) but soapUI is free, simple - and works really well; something is just "right" about it.

BTW: SOAPui just got better (what will probably end up being v1.7), with added support for automatic forced validation of request/response messages in editors - an additional new useful SOAPui feature suggested and discussed with the SOAPui team by your humble author of this blog! ;-)

Friday, October 06, 2006

Machine translation progress

Machine translation seems to have made impressive progress in the last few years. Running e.g. a Wikipedia article through Google Language Tools or Systranbox.com (which produces an almost identical translation) actually creates more than just a rough translation (as I think they did maybe 5-6 years ago); it's not really "correct" of course (like a human-written equivalent article would be), but... "reasonably understandable" definitely, I'd say.

Now, I always thought of MT as an end-user tool; one would consciously use it, knowing full well that it's a machine translation. Apparently not everybody agrees though... For example, I came across an article on Microsoft MSDN in German, which at first glance got me thinking, wow, these guys translate MSDN content? Very kind. Then I read more carefully... and somehow it didn't sound right (we don't say "seit wir uns Ihnen in der ...-Kolumne MITGETEILT haben" -- do we??). Hey, wouldn't some statement like "This is machine translated content. We hope the content is of interest and use to you in this form. (And maybe also:) Click here to order a high-quality human translation of this page" be more appropriate - hooked up via some Service Façade to a translation services company?

Thursday, August 31, 2006

GMail offline - via Google Desktop

I just realized that you CAN read Gmail offline... via Google Desktop! (You have to enable that "Search Gmail [X] Index and search email in my Gmail account" thingie; if you do, full copies of all Gmail Messages appear to be stored locally.)

Friday, August 25, 2006

XFire Example: Contract First & Spring

From: Michael Vorburger
Subject: Proposed new XFire example: Use of wsgen/xfire (contract first), and Spring
To: user@xfire.codehaus.org


Hello,

The XFire distribution includes only one simple example of a "contract first" design (write WSDL/XSD and generate Java from it), the 'geoip-client'. There is also a simple 'spring' example in the current distribution.

I put together another, more complete such example, which runs wsgen with the JAXWSProfile. In addition, my example also ties the former together with Spring, showing both the XFireExporter as well as the XFireClientFactoryBean (which the 'spring' example does not).

ZIP is on http://www.vorburger.ch/blog1/xfire-examples-wsgen-jaxws-spring12.zip

Regards.

PS: If the ZIP does not work for you, you might have to follow the instructions that mvn prints, to locally install the modules/xfire-jaxws-1.2-RC.jar found in the xfire-1.2-RC distribution, as that does not appear to be on the Codehaus repository today.

Tuesday, August 08, 2006

Electronic Passports - done all wrong?

Yesterday the morning newspaper carried a (very short) article about new Electronic Passports being hacked or something like that. I wanted to know what that was about, and some Googling revealed that the article must have been talking about a demonstration at the BlackHat conference, as e.g. mentioned on Bruce Schneier's blog.

After having read up on this, I am a bit perplexed - what are they actually trying to achieve with this electronic passport?? More "throughput" at border controls because of contactless machine reading? Nice, but the OCR-readable text at the bottom of new passports should already achieve that, shouldn't it? More "security" - like what? Just prevent changing data in a passport (what's the scenario)? Granted, that appears to be achieved by digitally signing the data in an RFID chip on the passport. But prevent forging passports? Not really... in fact, after having read up on the architecture, it appears terribly EASY to read the digital information (contactless, from a distance!!) in one passport and copy it into another (stolen or not) one, electronically indistinguishable. This is little better than the rechargeable card for the washing machine!

If you do want to have a reliable system to track and prevent abuse, why not do it right, with a hardware PKI chip thing that securely stores a private key (ideally generated on the chip HW itself; NOT generated externally and transferred into it)? Probably not impossible to read & copy from either, but certainly much, much more secure, from what I understand. A HW chip like in the SmartCard I now carry to work for one client, or like my ThinkPad laptop has built-in (BTW: I didn't get that working with Thunderbird/Firefox; do I need a special PKCS#11 module or something - does somebody know where to find that??).

Or is there a picture or some properly reliable biometric information in the RFID chip data that could be used to match the person presenting the passport to the person it was originally meant to be issued to? That would be an idea, I guess... would that work & help? I haven't read anything in that direction though.

Or is it a cost problem? I can't imagine a small proper HW crypto chip being that expensive, certainly not if purchased in the volumes this would be about.

PS: The actual form factor, i.e. whether it really is a credit-card size SmartCard, or that kind of chip embedded in the cover of a plastic passport, seems like an orthogonal issue to me; although it may be interesting to note that many countries in Europe have credit-card size "identity cards" that we use to travel within Europe, instead of the full-blown passports. Equally orthogonal to the chip itself is the access technology - although I admit being equally or even more stunned by that aspect... contactless and remotely readable... the practical reasons are (somewhat) plausible, but imagine the implications - what are they thinking?? If at least it had a big red on/off button or something like that! This is of course just the tip of the "non-technical" aspects of this entire topic... I won't go into that here.

Setting up two-way (mutual) SSL with Tomcat on Java5 is easy!

I recently wanted to set up Tomcat for two-way (mutual) SSL. It turns out this is fairly easy, particularly with Java5, since it now provides writes, and not just reads, of PKCS#12 keystores directly, so you don't actually need to use openssl anymore, as most of the instructions I found online about this suggest. I needed this for a test of something, and wasn't interested in using a Certificate Authority (CA) etc. All I wanted was an easy setup with Tomcat (v5.5.15/17 used) for testing; other people were going to worry about CA and issuing procedures and all that later. Here is how:

1) Create the key & cert for the Tomcat server:

%JAVA_HOME%\bin\keytool -genkey -v -alias tomcat -keyalg RSA -validity 3650 -keystore /path/to/my/tomcat.keystore -dname "CN=localhost, OU=MYOU, O=MYORG, L=MYCITY, ST=MYSTATE, C=MY" -storepass password -keypass password

Note that the storepass and keypass *HAVE* to be the same here. The CN should be the host/machine name that will appear in the HTTPS URL when accessing this Tomcat, so e.g. localhost, or also *.domain.com I believe.

2) Enable the SSL connector in Tomcat's conf/server.xml:

<connector port="8443" maxhttpheadersize="8192" maxthreads="150" minsparethreads="25" maxsparethreads="75" enablelookups="false" disableuploadtimeout="true" acceptcount="100" scheme="https" secure="true" sslprotocol="TLS" clientauth="true" keystorefile="path/to/my/tomcat.keystore" keystorepass="password" truststorefile="path/to/my/tomcat.keystore" truststorepass="password">

In keystoreFile you can specify an absolute pathname, or a relative pathname that is resolved against the Tomcat root installation directory. You also have to specify the truststore. (The stores don't have to be in a user home directory.)

If all you want is well-known one-way SSL with no client certificate, you would have to change the above to clientAuth="false" and stop here; else do as above and keep reading for the interesting part.

3) Create the client (yours, or a machine's) key & cert:

keytool -genkey -v -alias vorburgerKey -keyalg RSA -storetype PKCS12 -keystore vorburger.p12 -dname "CN=Michael Vorburger NEU, OU=DerTest, O=DieOrg, L=Lausanne, ST=VD, C=CH" -storepass mypassword -keypass mypassword

This looks and is similar to 1) above, but we are storing this into a different keystore (because the server/Tomcat doesn't have your private key) of storetype PKCS12. On Windows, you can double-click and import this *.p12 file into IE, or add it to Firefox via Tools / Options, Security, Certificates, View Certificates, Import. You'll have to type in the mypassword (above). BTW, again the storepass and keypass *HAVE* to be the same here, else Windows/IE or Mozilla certificate importing will fail.

4) Now we need to add the certificate (containing the public key) to the Tomcat keystore so that it recognizes this client certificate, by first exporting it from the keystore from step 3 and then importing it into the keystore from step 1: (Typically you probably would not import each client cert but a CA root cert and then sign each client cert with that one, but as said, I am showing a simple setup for tests here.)

keytool -export -alias vorburgerCert -keystore vorburger.p12 -storetype PKCS12 -storepass mypassword -rfc -file vorburger.cer

keytool -import -v -file vorburger.cer -keystore tomcat.keystore -storepass password

On Windows, you can double-click and see the vorburger.cer. You may also want to list the contents of the tomcat.keystore via this command, and notice how the entry for the 'tomcat' alias is a keyEntry, but the entry for 'vorburgerCert' is a trustedCertEntry here - makes sense?

keytool -list -keystore tomcat.keystore -storepass password

5) Just like with BASIC authentication, you still have to add this user to the usual conf/tomcat-users.xml, by specifying that dname from step 3) as username, and any password (will be ignored) and the roles you'd like that user to have:

<user username="CN=Michael Vorburger NEU, OU=DerTest, O=DieOrg, L=Lausanne, ST=VD, C=CH" password="null" roles="admin">

That's it! You should now be able to access e.g. https://localhost:8443, and the browser should ask you for, or automatically send, the client certificate which the server uses to 'strongly' authenticate you - done!

Some instructions also suggest to set <login-config><auth-method>CLIENT-CERT</auth-method> in web.xml, but apparently that's not actually needed on Tomcat with clientAuth="true" in server.xml.

Some notes for testing/debugging that may be useful if you would like to try this too: IE just says "Cannot find server" if the server does not accept the client cert - not very helpful! Firefox says "Could not establish an encrypted connection because your certificate was rejected by localhost.", which is better. Also, if for a test you remove a cert from IE again, you have to close it completely, all windows, shut it down - else it's still in memory, although it no longer appears in the respective dialog.


PS: Some copy/pasted background information that may be of interest... "The PKCS#12 (Personal Information Exchange Syntax) Standard specifies a portable format for storage and/or transport of a user's private keys, certificates, miscellaneous secrets, and other items. The SunJSSE provider supplies a complete implementation of the PKCS12 java.security.KeyStore format for reading and writing pkcs12 files. This format is also supported by other toolkits and applications for importing and exporting keys and certificates, such as Netscape/Mozilla, Microsoft's Internet Explorer, and OpenSSL. For example, these implementations can export client certificates and keys into a file using the ".p12" filename extension.

J2SE 1.4.x provided read-only support for PKCS#12 keystores, and a small number of protection algorithms. The enhanced PKCS#12 keystore in J2SE 5 supports reads and writes of PKCS#12 keystores, and provides more protection algorithms (such as those supported by popular browsers). This improves interoperability of PKCS#12 keystores imported/exported by J2SE, browsers, and other security applications."
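
As a tiny illustration of that last point - a hedged sketch, assuming a vorburger.p12 created as in step 3 above - this is roughly what reading and re-writing such a PKCS#12 keystore looks like in plain Java 5, no openssl involved:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;

class Pkcs12RoundTrip {
    public static void main(String[] args) throws Exception {
        char[] password = "mypassword".toCharArray();

        // Reading a PKCS#12 keystore was already possible in J2SE 1.4:
        KeyStore ks = KeyStore.getInstance("PKCS12");
        FileInputStream in = new FileInputStream("vorburger.p12");
        ks.load(in, password);
        in.close();
        System.out.println("Entries in keystore: " + ks.size());

        // ...writing it back out is what J2SE 5 added:
        FileOutputStream out = new FileOutputStream("vorburger-copy.p12");
        ks.store(out, password);
        out.close();
    }
}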

Sunday, August 06, 2006

Lego

Posted snaps of some LEGO constructions I made with Dév...

Monday, July 17, 2006

WSDL Validation with wsdl4j

Imagine a setup where Team A provides Web Services to Team B. The service Team A uses a "contract first" SOA approach, so they hand-write WSDL (more on that another time) and generate Java from it (not the other way around). Team A then implements these services using XFire as their SOAP stack. Now Team B takes these WSDL documents and generates client code from them, using Axis1.

Now WSDL being a standard format, this is a painless no-brainer, right? Right. Turns out Axis1 has a small but annoying bug which leads to it generating invalid Java code, which does not compile, from technically perfectly valid WSDL (if the 'name' of a <wsdl:message> used for a Fault has the same name as the actual <xsd:element> referenced by the <wsdl:part> in the <wsdl:message>). Easy to slightly adapt the original WSDL - once you have figured this out.

Of course, being "agile", I don't want this problem to come back haunting the team every so often, and thought about how to enforce/test the hand-written WSDL, which is perfectly valid technically, for this specific condition, during the automated build.

Turns out that thanks to JSR 110 & wsdl4j, this is pretty painless! The little WSDLValidationTask for ant (download WSDLValidationTask.java source) that I ended up hacking together is used like this:

<taskdef name="wsdl-validation" classname="wsdlvalidation.wsdl.WSDLValidationTask" />
<wsdl-validation>
<wsdl dir="wsdldir" includes="my.wsdl" />
</wsdl-validation>
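
If you'd rather not bother with the Ant task, the core of the check itself is only a few lines of wsdl4j anyway; here is a rough sketch (not the exact code of the task above, and simplified in that it checks all messages, not only those actually used as Faults):

import java.util.Iterator;

import javax.wsdl.Definition;
import javax.wsdl.Message;
import javax.wsdl.Part;
import javax.wsdl.factory.WSDLFactory;

class WsdlNameClashCheck {
    public static void main(String[] args) throws Exception {
        Definition def = WSDLFactory.newInstance().newWSDLReader().readWSDL("wsdldir/my.wsdl");

        // Flag any <wsdl:message> whose name equals the local name of the <xsd:element>
        // referenced by one of its <wsdl:part>s - the combination that trips up Axis1.
        for (Iterator messages = def.getMessages().values().iterator(); messages.hasNext();) {
            Message message = (Message) messages.next();
            for (Iterator parts = message.getParts().values().iterator(); parts.hasNext();) {
                Part part = (Part) parts.next();
                if (part.getElementName() != null
                        && part.getElementName().getLocalPart().equals(message.getQName().getLocalPart())) {
                    throw new RuntimeException("Axis1-unfriendly WSDL: message '"
                            + message.getQName().getLocalPart()
                            + "' has the same name as the element referenced by its part '"
                            + part.getName() + "'");
                }
            }
        }
        System.out.println("WSDL looks OK for Axis1 client code generation.");
    }
}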

Wednesday, June 07, 2006

Making Movies

The other weekend Dév and I made an animated movie of his current favourite toys, including a soundtrack. He loved it - almost as much as "real" Thomas the Engine episodes.

PS: Thanks Google Video for hosting; neat! I wonder if the link above will still work in a few years?

Simple HTTP Server in Java

Just for fun (well, almost), I wrote my very own simple HTTP server in Java (download 100 KB ZIP) the other night. It uses Java5 features, and requires no external libraries. While it certainly does work and can serve a static HTML+image website, it is of course NOT meant as a "real" web server, so do NOT use it - it's instructional, and it sure was fun to code!
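
For anyone curious what the skeleton of such a beast looks like: the following is NOT the code from the ZIP, just an even more minimal sketch (single-threaded, no MIME types, no error handling, no security), but it shows the basic ServerSocket accept/read/respond loop:

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

class TinyHttpServer {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(8080);
        System.out.println("Serving the current directory on http://localhost:8080/ ...");
        while (true) {
            Socket socket = server.accept();
            try {
                BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
                String requestLine = in.readLine(); // e.g. "GET /index.html HTTP/1.1"
                String path = "/index.html";
                if (requestLine != null && requestLine.split(" ").length > 1) {
                    path = requestLine.split(" ")[1];
                }
                OutputStream out = socket.getOutputStream();
                File file = new File("." + path);
                if (file.isFile()) {
                    out.write("HTTP/1.0 200 OK\r\n\r\n".getBytes());
                    FileInputStream fis = new FileInputStream(file);
                    byte[] buffer = new byte[8192];
                    int n;
                    while ((n = fis.read(buffer)) != -1) {
                        out.write(buffer, 0, n);
                    }
                    fis.close();
                } else {
                    out.write("HTTP/1.0 404 Not Found\r\n\r\nNot Found".getBytes());
                }
                out.flush();
            } finally {
                socket.close();
            }
        }
    }
}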

PS: Almost eight years ago, I put a Proxy Server written in C online on my website, and to my big surprise, during all the years since, every now and then a CS student (presumably asked to write one for class, just like I had been at the time) emails some questions... So just in case, folks: if you found this page because you are looking for a cheap way out of your homework, by all means download and look at it - but then fix/improve it, quoting the source and explaining your enhancements! And put a quick comment on this post below.

Number of days between two dates? (Java)

Recently I needed to get the number of days between two dates in Java.

Easy, right? Quite a few pages & articles suggest - and I admit my first iteration was, too - something like:

Calendar firstDay = new GregorianCalendar(2006, Calendar.FEBRUARY, 3);
Calendar lastDay = new GregorianCalendar(2006, Calendar.JULY, 17);

static final long DAY_MS = 1000 * 60 * 60 * 24;
int days = (int) ((lastDay.getTime().getTime() - firstDay.getTime().getTime()) / DAY_MS);

It turns out this is WRONG, for example for the two dates given (days == 163, but should be 164!) - some rounding error. This will round correctly, as some better Web pages explain:

double daysDouble = lastDay.getTime().getTime() - firstDay.getTime().getTime();
int days = (int) Math.round(daysDouble / DAY_MS); // = 164

but using the Calendar API provides a clearer, more readable and, most importantly, correct version, too:

assert firstDay.get(Calendar.YEAR) == lastDay.get(Calendar.YEAR); // Assumption
int days = lastDay.get(Calendar.DAY_OF_YEAR) - firstDay.get(Calendar.DAY_OF_YEAR);

Or, for more calculations of this kind, consider http://joda-time.sourceforge.net/
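
For example, with Joda-Time (a minimal sketch, assuming its LocalDate/Days API) the whole thing becomes a one-liner working on calendar dates, so no millisecond arithmetic and no DST surprises:

import org.joda.time.Days;
import org.joda.time.LocalDate;

class JodaDaysBetween {
    public static void main(String[] args) {
        int days = Days.daysBetween(new LocalDate(2006, 2, 3), new LocalDate(2006, 7, 17)).getDays();
        System.out.println(days); // 164
    }
}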

Friday, February 03, 2006

One Laptop per Child (OLPC) via the $100 Laptop initiative

I first really heard about the $100 Laptop by the One Laptop per Child initiative (OLPC) from news coverage of the World Summit on the Information Society (WSIS) in Tunisia, where Nicholas Negroponte and Kofi Annan co-presented a prototype. Having heard of similar initiatives before, e.g. the Indian Simputer or a concept called Nivo by Ndiyo (and undoubtedly there were and are other such initiatives), I wonder if this one will be successful.

Things do seem promising for this project... The goals are certainly very ambitious (hear: "plans to have up to 15 million machines in production within a year", "predicts could be 100 million to 150 million shipped every year by 2007") - but then, you have to set ambitious goals to reach real-world results, right? And this initiative is about something certainly worth dreaming for! The project clearly has enormous traction, and things appear to be moving: Since the UN summit in Tunisia, it has emerged that the major commercial laptop manufacturing company is involved in designing, and will be producing, the devices. In the U.S. the Governor of Massachusetts has submitted a bill to the legislature to deliver $100 laptops to all children in the state. A number of developing and emerging countries seem to be seriously interested, or have placed orders - the details are still a bit sketchy on this, as far as I could find. At the WEF in Davos a week ago a partnership with the UNDP was signed.

I do find the idea intriguing. The technical idea itself, I admit, but more importantly the vision and possible social implications this could have. Following are some of my assorted thoughts on various aspects of the $100 Laptop by the One Laptop per Child initiative.


Hardware

Let's first look at the raw hardware specifications. While all of this is probably not completely set in stone yet, the direction it seems to be taking is: AMD CPU, probably not x86-based. No moving mechanical parts (no CD/DVD-ROM, no HDD), but Flash memory. Several USB ports. A novel and innovative display, which can work in "dual mode" B&W and colour. A "flashy" exterior design. Very low power consumption; chargeable with a hand-crank generator. Linux OS and software.

Some of it is "innovative" - not mainstream today, thus untested, hear possibly risky. To push costs down as much as possible in any way, it's probably worth taking some risks for this. From what I understood, some of the major questions notably around the screen as well as the power consumption are still open.

Their approach of flash memory and no HDD may seem vanguard today, but I read the other day that this is coming from classic commercial vendors, too, and will likely be commonplace in higher-end notebooks in the not too distant future. This is certainly very useful to increase the much-needed robustness of such devices - as anybody will confirm, the hardware piece that most frequently fails in computers today is the spinning hard-disk.

I am curious how that innovative dual mode screen will come along... being able to switch between a low-resolution colour and high-resolution black & white mode reminds me of good ol' Atari days! I hope you don't have to reboot to switch between modes (unless rebooting is fairly quick)? Such a "modal" interface (screen) could possibly also make the software more cumbersome... have you used a recent Palm (e.g. T5) where some of the, even built-in, software can't use the full display size? What a pain! Can a dual mode screen really not be avoided? What resolutions are we talking about anyway? Is it "paper-like" crisp display, like that e-Ink stuff that you hear more and more about, most recently just these days with Sony's new eBook reader? Are such screens capable of nice shades of grey? Maybe initial versions of the 100 dollar laptop could have just that - I wouldn't bet my money that a colour screen is a must-have; maybe just a nice-to-have? Again, it depends on the intended usage, but if this would help to position it as a learning device instead of yet another, but cheaper, gaming console - then IMHO crisp B&W is a feature, not a bug!

I wonder how much memory it will finally have, both RAM and long-term storage. The official FAQ says 128MB of DRAM, with 500MB of Flash; I'd think anything under a few GB of Flash, say 10 GB Flash, would be a pity, a risk. Digital Photo Albums, anybody? It's my 2.5 year old kid's favourite application - seriously. Pre-loaded offline Encyclopedias? Space for learning software? (Maybe more storage, disk-based, could be offered centralized, where needed, through a simple out-of-the-box wireless NAS shared in one school? Another idea may be to have built-in compression of some file types for storage on the Flash; remember Stac's Stacker for MS DOS?)

No CD-ROM/DVD seems right to keep power consumption down and movable parts out - I haven't ever missed one on my Tablet PC sub-notebook that I am writing this on! (Actually, one possible use that may prove popular could be to also use this device for watching movies, both instructional as well as for after-school entertainment. Not sure if the current display design would allow for this, does using "similar technology as the one used in those cheap portable DVD players" imply the refresh rate is fast enough? Maybe an external drive could be offered that connects via USB. Or how about some peer-to-peer streaming stuff; think one drive/player per say school class, and groups of kids watching a (same) movie on several laptops? A normal standard CD-ROM for VCD reading, rather than a DVD, will do based on my experience in India; although I'm not sure what the difference in price between the two is nowadays. Such CD support is not top priority, of course.)

Lastly on to another piece: Will it have reasonable quality and intelligently positioned built-in microphone and speakers, and simple plugs for microphone and headphone (not just USB, for cost)? Firstly, for Text to Speech (TTS) which may be interesting for literacy applications? Secondly, for recording and subsequently listening to voice messages - this is not very big in "our" world, other than the voice mail on your phone; few people seem to record and attach voice messages to typed emails. But if you don't have phones at all, maybe recording a voice clip and sending it across the country in a hard disk on a motor bike (see below) could prove to be a popular usage - particularly for the parents of the kid that the laptop belongs to, who may be unable to read and write much? This idea would also be applicable if the connectivity was some kind of store-and-forward architecture.

Thirdly, and most interestingly probably, a built-in microphone and speakers would allow the laptop to be used as a VoIP device. This would require always-on connectivity and sufficient available bandwidth, but for cases where there is no or unreliable POTS connectivity, and e.g. some satellite IP link is being set up along with the laptops for the children, this usage could be hugely interesting in itself. All of the three suggested uses are not only about including a microphone and speaker, but at least as importantly about easy built-in software using the microphone and speaker.

Another aspect, more "architectural" than about individual components as above, is the classical "one user, one device" paradigm prevalent in most current PCs - and as far as I can tell in the $100 laptop. In principle I think this is the right approach... also because, somewhat to the surprise of my idealistic self, "ownership" seems to be a very important concept to children (and thus I guess all humans) - else my 3y old son wouldn't remind me that "this MY Lego, papa!" However, somewhat similar other projects in this space have suggested alternative architectures; the Simputer can be a "shared computing device" based on a built-in Smartcard Reader/Writer, and the Nivo/Ndiyo is a thin client approach - both mainly motivated as cost saving measures, I think.

I'd probably steer away from the shared-device approach. As for a thin-client style (each device appearing to be personally owned, although completely interchangeable), the main counter-argument is probably the need for maintenance/administration and the general dependency on the central server - think particularly of power in this context. Still, maybe providing a (much) cheaper wireless portable thin client (think a one-chip LCD+wireless controller; nothing else inside, in particular no memory and no real CPU, which are probably the next most expensive parts after the display?), for say $20 instead of $100, plus a commoditized, say, $1000 dual-CPU server with 2 GB RAM per school/entire village, could be of interest in some situations? This assumes that the configuration, loaded software etc. of all devices would be very homogeneous, which is probably a fair assumption in this context. If the server could run say 100 clients (essentially running very similar software to what was built for the full $100 laptop with 128 MB RAM each, but with all of the OS and application code shared, thus only using about 16-32 MB of per-client data), then this seems at least imaginable, and would mean a total cost of just $3000 instead of $10'000 - for the 100 children. Still, that's a lot of ifs and assumptions of course, and only real pricing, scalability and the "market" can tell if there was an interest in (also) providing this - later.
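
Just to make that back-of-the-envelope calculation explicit, here is a tiny sketch (Python, simply as executable pseudo-code); all figures in it - the $20 thin client, the $1000 server, the 100 children, the $100 laptop - are the rough guesses from the paragraph above, not real prices:

    # Rough cost comparison of the two architectures discussed above.
    # All prices are the guesses from this post, not real quotes.
    children = 100
    laptop_price = 100        # one full $100 laptop per child
    thin_client_price = 20    # hypothetical stripped-down wireless display client
    server_price = 1000       # one shared server per school / village

    laptops_total = children * laptop_price                            # 10000
    thin_clients_total = children * thin_client_price + server_price   # 3000

    print("100 laptops:  $", laptops_total)
    print("thin clients: $", thin_clients_total)
    print("ratio:        ", round(laptops_total / thin_clients_total, 1))  # ~3.3x cheaper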

Software & Development Model

In contrast to the hardware specs, I haven't been able to find much about the planned pre-loaded software etc. yet. Should one assume the software will be relatively "standard" Linux, so something along the lines of X11, GTK, Gnome or KDE, OpenOffice and Mozilla stuff? Or a more custom-developed and tailored suite of applications? (Given only 128 MB RAM, the latter may be more likely; just Thunderbird+Firefox alone easily eat up ca. 30+50 MB. Or not; could you actually configure a "standard" desktop Linux environment to work OK on "just" 128 MB RAM?)

A key question here will likely be whether the OLPC Foundation's main goal is to get "cheap raw iron" out of the door, or a centrally organized software development model leading to a complete pre-loaded "educational laptop". In the first case, individual receiving countries, groups, ministries, schools, or even individual recipients would have to search for and evaluate software options, customize, pre-load, and install software.

I assume they'll probably opt for a strong standard "image" - think kernel, drivers, and hopefully a good default browser, email client etc. At the same time, leaving the door open for innovation and participation by the larger community is clearly important, so by no means a "locked down" box. The particularly targeted receiving countries certainly do have a lot of talented folks, and it should be as easy as possible for them to jump on board and start hacking and trying out interesting new applications.

So a federated development model, with a strong central coordination role for e.g. the OLPC, may be a suitable approach. Just how much coordination is useful probably remains to be seen, but why not e.g. a registry of suggested/needed software, or a forum to coordinate software development between the parties using it? Or how about volunteer summer projects for CS university students, like Google's Summer of Code?

By the way, I wonder what the OLPC partnership with Google is about anyway... I clearly see the "conceptual" links (e.g. grassroots & large scale) and understand they have provided some financial backing, but wonder if they are working on something specific together at this point? Software? Connectivity? Definitely many very smart geeks over there... and who says brainy geeks couldn't come up with useful ideas to reduce poverty in the world by improving children's education?

On another note, if standard Linux desktop software is not applicable, maybe "picking" from other, earlier projects is of interest? Simputer software is supposedly made freely available - it may be worth a look whether any of it could benefit the OLPC project. Also, is there anything left over from the good ol' Apple Newton that could be of interest? How about looking at NewtonScript and the data soup, and freshening them up more in the direction of "content", with the relevant synchronization, easy application building, and a modern dynamic programming language, all built in to ease building learning-oriented applications on the OLPC device? Maybe the apparently planned inclusion of Squeak goes in this direction. Sounds pretty interesting to me; I need to look more into that stuff.

On to some more specific ideas for software: Could support for "non-real-time Internet connectivity" be built into e.g. the browser, or even at a lower level?

I myself often read, while traveling disconnected from any network, web pages that I had downloaded while on the network at home - and of course, when clicking on a link, you get some stupid technical error message. Why can't the thing remember that I want to read the linked page later and "queue" it somewhere? This idea is probably much more relevant in some OLPC scenarios than it is for me: what if you are connected to the "Internet by motorbike" say only once every two weeks, as in the Motoman project in Cambodia?
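
Purely to illustrate how small the core of such a "queue it for later" feature could be, here is a rough Python sketch; the file name, function names and caching scheme are all made up for illustration - this is not how any existing browser actually does it:

    # Minimal sketch of an offline "read later" queue: if a page can't be fetched
    # now, remember the URL; next time we are online, fetch everything queued.
    # Purely illustrative - file name, functions and storage format are made up.
    import json, os, urllib.request

    QUEUE_FILE = "read_later_queue.json"

    def load_queue():
        return json.load(open(QUEUE_FILE)) if os.path.exists(QUEUE_FILE) else []

    def save_queue(urls):
        json.dump(urls, open(QUEUE_FILE, "w"))

    def open_link(url):
        try:
            return urllib.request.urlopen(url, timeout=5).read()
        except OSError:
            # Offline (or unreachable): queue the URL instead of showing an error page.
            queue = load_queue()
            if url not in queue:
                queue.append(url)
                save_queue(queue)
            return None

    def sync_when_online():
        remaining = []
        for url in load_queue():
            try:
                page = urllib.request.urlopen(url, timeout=5).read()
                # Cache the fetched page locally for later offline reading.
                open("cache_" + str(abs(hash(url))) + ".html", "wb").write(page)
            except OSError:
                remaining.append(url)   # still unreachable, keep it queued
        save_queue(remaining)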

Making it possible (and easy!) to store emails on one device, pass them on to another, and ultimately forward them to the Internet when connected sounds like a great idea. Probably not just emails, but also requests to download information, publishing of content such as homepage or blog updates, etc. Doesn't it make you feel like good ol' FIDO Net is back?

The Wi-Fi mesh net should be seen in this light: it is probably not only about sharing real-time, always-on Internet access. For example, it may be useful to be able to send email, or easily exchange files, within an ad-hoc network of $100 laptops forming a wireless Village Area Network (VAN), without any central server infrastructure and configuration thereof - no DNS, DHCP, SMTP or anything of that sort.
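
And purely as a toy illustration of the store-and-forward idea over such a VAN: two laptops that happen to meet on the mesh could simply swap the undelivered messages each is carrying, until one of them reaches a device with an Internet uplink. Everything below (class and device names, the "internet" destination) is invented for the sketch; real delay-tolerant networking protocols do this far more carefully:

    # Toy store-and-forward: each device carries messages addressed to others and
    # hands them on whenever it meets another device; a device with an uplink
    # finally forwards whatever is addressed to "internet". All names are invented.
    class Device:
        def __init__(self, device_id, has_uplink=False):
            self.id = device_id
            self.has_uplink = has_uplink
            self.carried = []   # list of (destination, payload) tuples

        def compose(self, destination, payload):
            self.carried.append((destination, payload))

        def meet(self, other):
            # Exchange everything both are carrying (epidemic-style flooding).
            merged = list({*self.carried, *other.carried})
            self.carried = other.carried = merged
            self.deliver()
            other.deliver()

        def deliver(self):
            kept = []
            for destination, payload in self.carried:
                if destination == self.id:
                    print(self.id, "received:", payload)
                elif destination == "internet" and self.has_uplink:
                    print(self.id, "uplinked:", payload)
                else:
                    kept.append((destination, payload))
            self.carried = kept

    # Example: a message hops from one laptop via another to the uplinked one.
    a, b, gateway = Device("a"), Device("b"), Device("gw", has_uplink=True)
    a.compose("internet", "hello from the village")
    a.meet(b)          # b now also carries the message
    b.meet(gateway)    # the gateway forwards it to the Internet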

On to another area of software: just pre-loading some enterprise collaboration tool (you know, shared calendars, to-do lists, and a document manager) is probably of limited interest. However, how relevant are more specific school/classroom collaboration tools, e.g. something like the "Future Learning Environment" Fle3, in the OLPC context? How pertinent is eLearning stuff? Probably not so much in primary education, or am I wrong?

In some way, the software and content may be at least as important as the raw hardware in the larger picture of the OLPC vision... A quick search today revealed low-end desktop PCs going for somewhere in the $250-$350 range. Prices of commercial laptops are higher, but also dropping by the month, and in the medium term (probably still a few years) a sub-$100 laptop may well be commercially available anyway - maybe also because of this project's impact on the commercial market, e.g. thanks to available innovation in display technology (which OLPC is not patenting), or simply because of market price pressure. At that point of largely commoditized low-end hardware, it is the available content and software that will define progress, not the hardware anymore.


Educational Tool & Content

It took me a little while to understand that this is aiming to be more than simply, say, "distributing traditional school material in electronic form". The aim is to enable children, by distributing Internet-connected laptops, to learn better - on their own, thus augmenting the traditional form of a "broadcasting-instructor-led" learning experience.

This fits with what has been observed e.g. in India by the "minimally invasive education" research of the people around the Simputer project, where a computer mounted on a wall was made available to children who had never before used one: "(...) six-to-thirteen-year-olds can teach themselves to use computers regardless of their social, economic, ethnic and even linguistic status. (...) We always underestimate their abilities."

This is not limited to the developing world, as results from using computers in the classroom in a US state seem to demonstrate. There is something about learning at a self-paced speed with a computer that is different... it's responsive, you can try things out, and it's not the human teacher or the kid sitting next to you - maybe this sounds strange, but (to some kids at least?) this probably helps with learning.

I am still a little curious as to what this may translate to in practice. Merely enabling Internet access is unlikely to be sufficient, though in my mind it is certainly an aspect of this; and the people behind OLPC have almost certainly thought of this. Will there be specific applications written and included with the device? For starters, e.g. literacy applications to learn or improve reading and writing skills, like I believe the Simputer has for Indian languages? Other applications to teach, say, basic math? And content like history, the basics of law and human rights? Who will decide? Once there are many, how will they be distributed?

One prime candidate, which has been mentioned by others already I believe, may be the now well-known Wikipedia. Its founder after all wrote, quote: "I'm doing this for the child in Africa who is going to use free textbooks and reference works produced by our community and find a solution to the crushing poverty that surrounds him. But for this child, a website on the Internet is not enough; we need to find ways to get our work to people in a form they can actually use." Another candidate with possibly interesting content may be the OneWorld connection. Certainly others more knowledgeable in this domain will know about other on-line communities which would naturally fit with the OLPC project.

There may be some technical challenges in including such currently online, Web-oriented content locally for offline reading on the device, or otherwise making it easily available, unless ubiquitous Internet connectivity can be provided - but I believe such problems can be solved. If simply the amount of content is a problem due to the limited memory (?) of the devices, maybe some sort of distributed file system, with all devices in one school holding partial content, would be suitable here? If Internet connectivity is available but with very limited bandwidth, doing centralized local content caching on a server is of course trivial. To again avoid any central infrastructure requirements, maybe some funky peer-to-peer distributed caching could be devised?
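
To make the "all devices in one school holding partial content" idea a bit more concrete, here is a minimal sketch that deterministically assigns each article to a "home" laptop by hashing its title, so any device can work out which peer to ask over the mesh, without any central index. Device names, function names and the scheme itself are invented for illustration; a real system would at least need replication, since any laptop may be switched off or out of range:

    # Toy partitioning of an offline encyclopedia across N laptops: hash the
    # article title to pick the device that stores it. Invented names; a real
    # system would add replication and handle devices that are offline.
    import hashlib

    DEVICES = ["laptop-01", "laptop-02", "laptop-03", "laptop-04"]

    def home_device(article_title):
        digest = hashlib.sha1(article_title.encode("utf-8")).hexdigest()
        return DEVICES[int(digest, 16) % len(DEVICES)]

    def lookup(article_title, local_cache, my_device_id):
        if article_title in local_cache:
            return local_cache[article_title]      # already cached locally
        owner = home_device(article_title)
        if owner == my_device_id:
            return None                            # we are the home device but don't have it
        print("ask", owner, "over the mesh for:", article_title)
        return None

    # Example: which laptop would hold which article?
    for title in ["Water", "Photosynthesis", "Geography of Brazil"]:
        print(title, "->", home_device(title))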


Financing/Money

Some thoughts on the money side, how this will be financed etc. As I understand it, OLPC has received funding to get started. The idea is to produce and sell the laptop at production cost to governments. What I don't get is how the numbers work out for the countries mentioned as being interested, like China, India, Brazil, Argentina, Egypt, Nigeria, and Thailand. $100 times however many children of primary-school age these countries have - isn't that still a very substantial investment? The UNDP has officially jumped on board and will support this - I am curious if, and from where, the money will ultimately be made available.

Don't get me wrong - this most certainly seems a very worthwhile cause. In fact, it made me think of that quote about "give a hungry child a fish and you'll have fed her for the day, but teach her how to fish and you have given her the means to feed herself for a lifetime", or something like that. Investing in children's education should clearly be very high on any country's spending priorities.

Grants for such a large-scale program could probably come from established philanthropic foundations (the Soroses, Aga Khans etc. of this world?) and first-world states active in this field. It may also particularly appeal to the respective arms of commercial enterprises in the IT field; e.g. to me this sounds like a match for the new Google Foundation, which wants "to do things at scale". And "One Laptop per Child" certainly sounds like "scale" - this doesn't look like your average NGO or corporate let's-ease-our-conscience sponsoring of some classrooms somewhere with some fresh pencils, schoolbooks or just a couple of computers.

But I wonder if substantial amounts could be collected from private individual donations as well? Here is why: to me, this seems a "match" for "personalized" donations - instead of $50 into a big pool to "eradicate hunger", you'd give $100 for a tool for a child, one child. A child with a name. It's like those "adopt a child and donate $10 monthly for food" schemes. Except that here, maybe the donor's name would initially appear when the laptop is used: "This laptop has been sponsored for you by XYZ", plus picture, "Click here to send a short Thank You email, describing what you are planning to do with your new learning tool." It's just an idea, gauging whether there could be novel approaches to the "money input" side of this project as well.

Of course, the idea of a commercially available version of the laptop has come up as well: "Those might be available for $200, and $20 or $30 will come back to us to make the kids' laptops." For some reason, I have some doubts that, say, brand-conscious teenagers in developed nations would be very interested in purchasing such a device - and maybe that's quite all right. Depending on the software or content that sets them apart, maybe it would have similar relevance for primary-level education in the developed world as it has in developing countries. Similarly, I think there may well be a commercial space for a very low-cost Internet access device for the many underprivileged in developed nations, outside of the educational space. However, successfully commercializing the laptop probably shouldn't be in OLPC's scope. Now that Quanta is involved, and the $100 design is not going to be protected or any part of it patented - how about just letting commercial laptop market forces deal with this? As long as it's ensured that some chargeback comes back to OLPC. Or maybe not even that, as increased commercial sales would implicitly lower the production cost? (It may be hard to delineate what an OLPC-based/inspired design is after some point of diversifying commercial models; unless it's simply the use of the innovative display technology. Although, wouldn't that be almost like licensing the patent again?)


Support & Service

On to something else: some bloggers have questioned how service would work for the $100 laptop. A fair point to raise, I guess. Some thoughts:

From how I understand the model, neither OLPC nor the original manufacturer would probably want to get directly involved in any service or support chain, neither end-user nor 2nd-level. I'd expect much of this to be done by local partners and intermediaries - possibly not so much in the traditional commercial hardware vendor sector; I'm thinking more of local self-service structures. But there are probably some things that can be done, and thought of, during the design of the $100 laptop to facilitate this aspect.

For example, the local whiz kid at school should at least be given a fair chance to unscrew the cover to, say, find the cable from the power plug to the main board that may have come loose, and be able to re-solder it without burning the CPU (hopefully not placed right next to it). Widely available standard plugs & cable types are probably a good idea here - contrary to the impression one sometimes gets of what commercial laptop vendors do.

Also, a professional, high-end quality verification program with ongoing feedback from the field - not really about fixing one broken device at a time, but about effectively collecting information to avoid a given known problem in the next iteration - is likely another very important piece of the service story. By the way, this "feedback loop" will be extremely important not just for improving hardware robustness, but also for learning about usage patterns, software features, etc. I wonder how this can be effectively organized at such a large scale?

Apart from all those hardware aspects, curious kids will certainly easily manage to screw up the software side of the device - and they should! A built-in hard reset that can re-initialize the OS etc. from ROM (sort of like my ThinkPad does with a hidden partition on the HDD that can re-install without the usual Recovery CD) is very effective and useful, I found. (You always have the problem of personal data, files, and configuration settings. Some solution for that would have to be provided; e.g. easily copying them to your friend's device over the wireless network?)

Last but not least, one area that comes to mind in this context is environmental concerns, particularly the disposal of batteries and of devices in general. This area should certainly be given thought. In my native Switzerland, all electronics equipment now carries a mandatory surcharge on purchase, and that money is used to appropriately dispose of old equipment, which can be returned to any point of purchase, independent of the original vendor. Probably some sort of scheme like that is needed. Maybe it could be enhanced by actually returning some of the initial surcharge money to whoever turns in laptops and batteries for disposal? When I was in India a few weeks ago, I noticed a scheme like this for paper recycling: apparently you actually get money for bringing old paper somewhere to be recycled. It's not much, but still enough to create a small business of people going from door to door collecting paper to recycle, and for consumers to keep paper instead of throwing it into the garbage.


Misuse

Now, suspending the utopian and idealistic mood for a second, here is another aspect that went through my mind: does any thought have to be given to things like black markets and other similarly ugly phenomena? Just a thought... if these are distributed for education through the state only, wouldn't there be a danger of devices being re-channeled and sold instead? Should they be made commercially available just because of this? Or, to the contrary, if similar devices were sold on the official world market, would this lead to corruption diverting devices from one purpose to the other, not reaching the intended audience? This question has of course been raised before, and I have read about one idea that the devices could be "yellow like a school bus to make it unattractive to thieves" - but I wonder if it will be quite that simple?

Even if they are not channeled to black markets outside of the education system in one country - how do you tackle problems of corruption leading to e.g. more devices going to privileged upper-class schools instead of the remote rural areas they were intended for? Just flood a given "market" with so many machines that this is unlikely to be of any interest?

There is also a certainly valid concern about "abuse" on the production side of things - of the hardware, I mean, probably not the software. I have read some pretty horrible reports on environmental problems in China and the effects these have on some local populations; I think the article was about some place where they "recycled" used electronic equipment without any precautions for dealing with hazardous materials, with entire villages terribly sick, ground water contaminated, etc. It would certainly be a shame, literally, and unacceptable to the project, if the production of a cheap laptop for children in one part of the world negatively impacted children, or adults, in another part of the world.


Concluding

To me, this is certainly a very exciting project. I hope the time is right for this to happen, and that it will have a lasting impact on the world, in particular on those who need it most.

Best of luck to the OLPC project - and let me know how I can help!

Friday, January 20, 2006

My Tablet Computer Experience

I have been using a Tablet PC for about 3 months now, and wanted to write down a few notes about the experience so far. This article is not a review of any specific model, but about the TabletPC (Convertible) concept in general. (For the interested, the exact model is the Lenovo/IBM ThinkPad X41 Tablet, with 1.5 GB RAM. This is my fourth or fifth ThinkPad actually... the first one was a ThinkPad 560 back in 1997!)

In short - I love it. The "convertible" concept makes sense to me. Yes, I use it as a "normal" (sub-)notebook fairly frequently for anything from writing emails to actual programming - but I love to be able to switch it to "slate" mode, at work in a meeting, while travelling e.g. in the train, at home on the sofa...

What I probably use it most for is as a compact reading device - it really does make a difference to read a PDF file in full-screen portrait instead of the usual landscape orientation! Think of turning your screen so that it looks more like a Letter/A4 sheet of paper than it probably does now, while you are reading this. Similarly, I love browsing the Web from my living room couch with the tablet, not sitting at a desk! Either in portrait or, more often, in the usual landscape orientation - it's easy to switch! Just using the pen works well for browsing - you need a lot fewer keys to consume information than to contribute any... in fact not just fewer keys, less brain too - but that's another story. Of course, music plays from the tablet while reading or browsing... Of course, I could have a similar experience with some MP3_player+eBook_Reader+ePen+Laptop(+POTS_Phone) combination, but for now, a "converged device" approach based on a TabletPC works just great for me.

Now, I have hardly used the character recognition for anything real... it works reasonably well, but... I don't have a need for it - can that be?! I have spent many a meeting now taking notes scribbled on the tablet instead of on paper; both for emailing them to colleagues in ink form, and for later looking something up - without carrying that paper notebook around anymore. I just usually don't bother with, or need, converting such handwriting to text.

I do wish it were simple to quickly scribble an email - again, no ink-to-text conversion would be needed, just scribble a handwritten short note and send it off as a picture. Surprisingly, this is not easily possible with Outlook (there is a commercial 3rd-party plug-in though), and certainly not with Thunderbird yet.

Something else is drawing with the TabletPC - it definitely feels more natural. This should be obvious, I guess, but if you do have some doubts, do some Noddy cartoon character colouring with your small kid, once with a mouse and once with a tablet pen, and watch which he/she picks up more naturally... I am no artist, but have occasionally used this to draw up a diagram or something. It's particularly handy to draw or jot down ideas during a presentation, with the TabletPC hooked up to a beamer... ;-)

Wednesday, December 07, 2005

LIFT06 || Life, Ideas, Futures. Together. || 2-3 Feb. 2006, Geneva, Switzerland

LIFT06 might be an interesting conference... happening in Feb, right next door! Going?

Friday, November 04, 2005

Using FeedBurner

I switched to using FeedBurner. Still have to work on the vorburger.ch integration and visuals...

Various: Solarmetric/BEA, Bayesian-based RSS Aggregators & Commentator

Just briefly, three various notes from the last few days:
  • I am still searching for a good RSS aggregator, see my article from last week (Outfoxed)... and I am very surprised there doesn't seem to be any Real Good Stuff - there is an opportunity here! What I'm looking for is a desktop reader that works well offline (ideally including pre-loading and a Read-Later list for when I'm online again), supports reading lots of feeds ("newspaper" stuff, "aggregate" view - not just good ol' three-pane email reader ideas), and ideally has some funky yet really useful Technorati inlink-count relevance ranking, Bayesian training (is it that hard to stuff a Bayesian thing into an RSS reader?), and del.icio.us integration (both ways: I tag things outside of the RSS reader, so that's what I'm interested in - show me articles like that; and I could tag articles in the reader that are particularly interesting). So far I am using BlogBridge; not nirvana... OK, I guess. Read some of my BlogBridge feedback and ideas. There are some web-based ones going in interesting directions, like Rojo, but I do need an offline-capable thingie as I'd like to be able to read while traveling.

  • Solarmetric got acquired by BEA! Kudos to all of them: Patrick, Neelan, Greg, Brian, Abe, Marc - certainly well deserved. (I have been working with Solarmetric Kodo since when it was still TechTrader Kodo. Always very happy with it, and truly excellent tech support from these guys.)

  • Commentator from cenqua is worth a look if you have a second and want a good laugh...

Sunday, October 30, 2005

How to redirect and tee both stdout and stderr

Uh yeah... how did that work again, to run a UNIX command that shows both its STDOUT and STDERR on the console as usual, as a progress report, yet also writes them into two separate files? Like this:

(rsync [...] | tee stdout.txt) 3>&1 1>&2 2>&3 | tee stderr.txt

Puh! The 3>&1 1>&2 2>&3 dance first saves the subshell's stdout (the pipe into the second tee) as fd 3, then points stdout at the console via stderr, and finally sends stderr down that saved pipe - effectively swapping the two streams so each tee ends up with the right one. (Thanks to this article for pointing me in the right direction.)

Thursday, October 27, 2005

Yo, me too, now I also flickr!

Ok ok, this was long overdue... ;-) Ta ta ta taaaa! Happy to announce: http://www.flickr.com/photos/vorburger/ - for you!

Kaboodle

So Kaboodle is what Keiron has been up to lately... Looks interesting!

It's basically a very simple "blog", specialized for bookmarks, but unlike del.icio.us and friends, the idea is that you create (some) topical pages and have a group of "items" on each - so your pages on Kaboodle become a "bag" of related things you want to share. Visitors can vote and comment on items on your Kaboodle pages, and you can keep your Kaboodle pages private and invite only your friends into them.

I have created two separate pages as a quick test. I am curious to see if I'll continue to actively use this... I'm... not sure. Actually, it depends what for, I think. This is just perfect to e.g. list a couple of... places to go, gifts, God knows what - and then ask friends, family, and co-workers to have a look, comment and vote. I'll use it for that - only that; sorry, I don't do that every day! As bookmark-sharing social software, I think I'll probably stick with del.icio.us for now, and its tags, search, clicking through to other users, etc. -- Interestingly, at least some of the pages you see on the Kaboodle homepage today show a one-item, bookmark-ish kind of usage.

Others have blogged/written about more features that could be added; I am going to refrain - but I'm curious which direction they want to take this in... Clearly aimed at more of a "consumer" audience, and positioned somewhat differently / more specifically than general social "bookmarking", I'd expect them to add new features such as searching, RSS, tags etc. more conservatively, I mean slowly, if at all. Grandma doesn't tag - and doesn't need to, when creating a page with suggested dishes for the weekend get-together.

I gathered from the blogosphere that stuff like http://wists.com/ and http://www.clipmarks.com/ is the competition. On a very quick, superficial look, the Kaboodle UX seems simpler & neater - I didn't check it out any further though.

Finally, on a more technical note: I was skeptical when I read that when you click the "Add to Kaboodle" button (which you can add to your browser), Kaboodle would extract a headline, image/icon and a short summary. I have to say I am surprised how well this seems to work. Particularly the extraction of a short description - ignoring titles and menus and apparently trying to pick the first phrase - seems reasonably robust. Can I get this method built into an RSS reader - you know, when fetching feeds with no content, get the page, strip the stuff around it, and just show the article's real content? ;-)

I'll leave it at this for now - time to get some sleep. Keiron, best of luck with Kaboodle!!!

PS: Actually, here is one minor feature idea I ran into just while playing around: I added something to the wrong one of my pages, and couldn't "move" it - so I deleted and re-added it. Minor, certainly.

Tuesday, October 25, 2005

i7: Outfoxed follow-up - is it "relevance" not "trust" ?

Actually... in my mind a somewhat related topic to Outfoxed & Co. is that of "ranking", but more in the context of "information relevance" than this "trust" business... I am hoping to spend some time looking into whether there is a good RSS reader that helps decide which posts (not just which feeds) I may be interested in reading...

This article What's Wrong With: Feed Readers has some similar thoughts that I share. I'll post some extensions and further thoughts separately another time - but essentially: Is there such stuff?

I looked around a bit tonight, gave Findory a try - somehow not convinced; particularly unhappy that I can't make stuff "hide", like "done, not this, go away". I am now using RSSOwl with its AmphetaRate integration... it probably needs further training; I would prefer it to rank my aggregated favorites though, not suggest new ones from an additional feed. Using the RSSOwl rating I also realized that I spend too many seconds on "uh, 1 or 2, 4 or 5" - maybe a simpler binary "thumbs up, thumbs down" (like in Temboz [not tried, looks too alpha]) works better?

Then I found NewsMonster and BlogBridge, both of which look promising... an update will follow after some experience using them!

PS: In my mind, some of these "ranking" ideas themselves have nothing to do with RSS as a format... I have been wondering how useful data for ranking, prioritization and personalization could be similarly "mined" from e.g. the My Documents kind of files on my desktop, from generally watching my surfing habits (outside of Personal Search - just what I click on, how much time I stay on pages, etc.), as well as maybe from all the emails I get/write.
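
Since I keep wondering above whether it is really that hard to stuff "a Bayesian thing" into an RSS reader: the core of per-item scoring can be surprisingly small. Below is a bare-bones sketch (naive Bayes over word counts, with invented class and method names); a real reader would of course need proper tokenization, persistence, feed plumbing and smarter features:

    # Bare-bones naive Bayes scorer for feed items: train it with items I marked
    # interesting / boring, then rank new items by how "interesting" they look.
    # Just a sketch of the idea; names and training data are invented.
    import math
    from collections import Counter

    class ItemScorer:
        def __init__(self):
            self.words = {"interesting": Counter(), "boring": Counter()}
            self.totals = {"interesting": 0, "boring": 0}

        def train(self, text, label):
            for word in text.lower().split():
                self.words[label][word] += 1
                self.totals[label] += 1

        def score(self, text):
            # Log-probability ratio of "interesting" vs "boring", with
            # add-one smoothing; higher means more likely interesting.
            vocab = len(set(self.words["interesting"]) | set(self.words["boring"])) + 1
            log_ratio = 0.0
            for word in text.lower().split():
                p_int = (self.words["interesting"][word] + 1) / (self.totals["interesting"] + vocab)
                p_bor = (self.words["boring"][word] + 1) / (self.totals["boring"] + vocab)
                log_ratio += math.log(p_int / p_bor)
            return log_ratio

    scorer = ItemScorer()
    scorer.train("new object relational mapping release announced", "interesting")
    scorer.train("celebrity gossip of the week", "boring")
    items = ["more thoughts on object relational mapping", "yet more celebrity gossip"]
    for item in sorted(items, key=scorer.score, reverse=True):
        print(round(scorer.score(item), 2), item)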

Comment spamming on blogs

Wow! This (and this) didn't take long... "comment spam" is what they call it, apparently. (I hope that by simply enabling the option to "show word verification for comments" this will remain a one-time newbie-blogger thing.)

PS: I have always thought about spam with a mixture of disgust and "amazement" - I mean, that professional spammers would go to such great lengths to... but I hear it really is big business; I guess that explains it? Although I am not sure what's worse: the fact of spam, or that some people apparently (still, today!) presumably click on "MAKE MONEY NOW". What a world. Anyway.

I was blogged, six months ago!

I thought it was fun to notice that somebody blogged a page from my homepage (now very old) a few months ago... Actually, that's not a blog entry, that's just a copy/pasted bookmark! Well, I guess that's not such a big difference - or is it? If it isn't, then why hasn't anybody come up with a combined Technorati and del.icio.us; why do we think of these as different things? ;-O

PS: Sure, 'cause it's just SO easy to del.icio.us something that you potentially do it many times a day - while you probably blog (properly) less frequently. Still, why two systems?