Sunday, 29 April 2012

Haskell and the World: Unicode and the Common Misuse of ByteString

Haskell's string handling is actually quite good overall. Strings are always encoded consistently, and as long as you don't leave the space of Haskell itself, you probably won't have (m)any problems. Sadly, this has caused some Haskell programmers (including myself) to become a little careless when handling strings of any kind, be they String, ByteString or Text.

So here's the basic minimum of what you need to get by with text in Haskell. Let's first get a grip on some basic terminology…

Charsets, Encodings and Codepoints

People tend to just throw these three together, but they shouldn't — they're all fundamentally different things!

  • A character set is just that. It's a set, and it contains characters. Remember: a set is a bunch of stuff. All elements of a set are unique and a set is inherently unordered. That's it.

  • An encoding defines a function that maps characters from the charset to byte strings.1 Every encoding function needs to have a retraction, i.e. it needs to be reversible. Given an encoding function f, its domain C (the character set) and its codomain B*f (the set of valid byte strings for that encoding), an encoding should satisfy the following two properties: f ∘ f⁻¹ = 1_{B*f} and f⁻¹ ∘ f = 1_C, making it an isomorphism.2

  • Code points enumerate a character set by creating another isomorphism between the charset and numbers. It is very important to note that encodings and code points are distinct!

Why do we need code points when we already have encodings? The Unicode standard for example defines a pretty exhaustive character set that tries to capture all of the world's languages and various other stuff. The code points segment this set into planes and enumerate it sequentially in a well-defined manner.

This makes it possible to refer to a particular glyph via a single number, and without having to represent said glyph graphically (say, because you lack the font) and also without having to specify a particular encoding. Lastly, this makes for a portable representation of glyphs by mere integers.

Char in Haskell

Let U be the Unicode character set, and ü,я ∈ U (Latin small letter 'u' with diaeresis, Cyrillic small letter ya.) Let u: U → ℕ be the function assigning code points to elements of U. Then u(ü) = 252, and u(я) = 1103. This is what a Char value in GHC actually represents:

λ> 'ü'
'\252'
λ> 'я'
'\1103'
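
By the way, the function u is exactly what ord from Data.Char gives you, and chr is its inverse:

λ> import Data.Char (ord, chr)
λ> ord 'я'
1103
λ> chr 1103
'\1103'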

Another common way of representing code points is as 3-byte-wide hexadecimal byte string values. The letters ü and я are rendered as U+0000FC and U+00044F, respectively, where the first byte stands for the plane this code point belongs to; it is sometimes omitted when the code point lies in the BMP, or Basic Multilingual Plane.3

So the Char data type in Haskell represents characters as Unicode code points, i.e. as numbers. When you show a character in Haskell, you get back a decimal escape of its code point, except for printable ASCII characters, which are rendered as themselves. Non-printable ASCII characters are assigned special names:

λ> "\0\1\2\3\4\5"
"\NUL\SOH\STX\ETX\EOT\ENQ"

Encoding Functions

Let B be the set of bytes (numbers from 0 to 255) and B*utf8 the set of valid UTF-8 byte strings. Let futf8: U → B*utf8 be the UTF-8 encoding function. Then futf8(ü) = 0xc3bc, and futf8(я) = 0xd18f. Remember that an encoding function doesn't map characters to a number, but to byte strings. In the case of UTF-8, this is one or more bytes.

Let's also define futf16: U → B*utf16, the little-endian UTF-16 encoding function. Then futf16(ü) = 0xfc00, and futf16(я) = 0x4f04. Do you notice how these byte strings differ from the byte strings representing the letters' code points? It all comes down to the fact that we chose LE: UTF-16, as opposed to UTF-8, depends on endianness.
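
You can inspect such byte strings in GHCi by unpacking the encoded values into raw bytes (shown as decimal Word8 values; this uses Data.Text.Encoding, to which I'll come back below):

λ> :set -XOverloadedStrings
λ> import qualified Data.ByteString as BS
λ> import Data.Text.Encoding (encodeUtf8, encodeUtf16LE)
λ> BS.unpack (encodeUtf8 "я")
[209,143]
λ> BS.unpack (encodeUtf16LE "я")
[79,4]

209 and 143 are just 0xd1 and 0x8f in decimal; 79 and 4 are 0x4f and 0x04.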

… and why Char8.pack isn't one.

So far, the functions we've seen are isomorphisms, and encoding functions should always be structure preserving! But ByteString.Char8.pack does not fulfil that property, and, consequently, it doesn't form the identity on its domain when composed with its retraction (which an isomorphism does.) The supposed "retraction" of pack is unpack.

λ> import qualified Data.ByteString.Char8 as B
λ> (B.unpack . B.pack $ "я") == "я"
False
λ> (B.unpack . B.pack $ "я") == "O"
True

Earlier I said that a way of representing Unicode code points was by rendering them as 3-byte-wide byte strings. Unfortunately, this is not what ByteString.Char8.pack does. Let's look at the documentation of Data.ByteString.Char8:

Manipulate ByteStrings using Char operations. All Chars will be truncated to 8 bits.

The word truncate is a big red light. You lose information.

λ> B.pack $ [toEnum 255]
"\255"
λ> B.pack $ [toEnum 256]
"\NUL"

There is no legitimate use case for ByteString.Char8.pack in production code. It exists out of pure laziness, and in order to facilitate that laziness in developers. Even if you're sure you'll only ever process English, it's naïve to assume you're going to get by with ASCII, which is likely to be the only encoding you're not going to have any problems with when using Char8.pack.

Just to drive my point home, I'll use all caps:

WHEN YOU USE ByteString.Char8.pack YOU JUST TAKE A UNICODE CODE POINT AND TRUNCATE IT TO ITS FIRST BYTE AND ARE STILL PRETENDING IT'S TEXT!

Just stop it already.

Correct Text Handling in Haskell

In most cases, you should probably just use Data.Text.4 In the case of Data.Text, you can even use its IsString instance and add the OverloadedStrings pragma so you would never notice you're using Text and not String.5 Data.Text.Encoding supplies a couple of very nice encode and decode functions to marshal your Text values to and from ByteStrings.

λ> import qualified Data.Text as T
λ> import Data.Text.Encoding
λ> :set -XOverloadedStrings
λ> :t encodeUtf8
encodeUtf8 :: T.Text -> B.ByteString
λ> encodeUtf8 "ü"
"\195\188"
λ> encodeUtf16LE  "я"
"O\EOT"

The weird output we get back from this function is actually just a ByteString rendered by its Show instance. But in my opinion, ByteString's Show instance is broken and misleading!

Rendering octets as a String like that makes no sense, because a String makes a guarantee that it represents valid Unicode code points corresponding to the intended characters. Which this does not. It would be much more sensible to just render the hexadecimal values, without making any promise about representing textual data (because ByteStrings are NOT textual data.)

λ> import Data.Hex
λ> hex . encodeUtf8 $ "ü"
"C3BC"
λ> hex . encodeUtf16LE $ "я"
"4F04"

As a side note, you can also use text-icu, which allows for direct conversions of Strings and a more comprehensive treatment of encodings.6

tl;dr

  • Don't use ByteString.Char8
  • If you have textual data, you should be representing it as Data.Text
  • ByteStrings have nothing to do with Strings — they're very different from one another.
  • ByteStrings never should be used to represent textual data. As soon as you've encoded a particular piece of text into a ByteString, treat it as binary data, and do not render it via ByteString's Show instance, but only by using the appropriate decoding and encoding functions, as sketched below.
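
For instance, a round trip through UTF-8 gives you back exactly the Text you started with (with OverloadedStrings enabled, as above):

λ> import Data.Text.Encoding (encodeUtf8, decodeUtf8)
λ> decodeUtf8 (encodeUtf8 "я") == "я"
True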

  1. Technically, that's not true. An encoding can map characters to pretty much anything. Frequencies, Morse signals and other such things are all possible codomains of the encoding function, but we'll restrict ourselves to byte strings.

  2. This is a somewhat simplified view of an encoding, and in practice, it'll be desirable to break this property in order to establish certain equivalence relations; c.f. Unicode equivalence. We'll adopt this simplified view for the point of this discussion, though. Thanks to dmwit on reddit for pointing this out to me.

  3. If you ever happen to run across funny U+XXXXXX sequences, DuckDuckGo allows you to decode them into their definitions and into decimal.

  4. Data.Text.pack is safe to use ;-)

  5. This extension has some problems of its own, but with Data.Text it should be safe to use.

  6. Thanks to yitz on reddit for recommending this library to me

Tuesday, 22 November 2011

Re: Highbrow Java; Or: Java Generics and Why I Still Hate Them

This, well, rant, originated as a more elaborate answer to a comment on an article I left unanswered for over half a year. I wanted to apologise by coming up with a rather exhaustive answer. Turns out I got exhausted before I could finish complaining, so I'll just post what I have right now and will continue ranting at some later stages.

Because this is so long, I'll save everyone the trouble and give an abstract: I argue that, while generics were a necessary and "good" addition to Java, this particular implementation of what is in essence second-order typed lambda calculus is poor, and severely limits the compiler's ability to guarantee run-time type safety. Turns out I'm focussing on arrays (again,) but that was the low-hanging fruit. I'll get to the lack of expressive power in a later post.

Type Safety and Type Polymorphism

Why do we care about static type systems and type safety? I think one can discern two major points, quoting from Wikipedia:

  • Static typing is a limited form of program verification
  • Program execution may also be made more efficient by omitting runtime type checks and enabling other optimizations.

I actually only care about the first point: a good type system can catch mistakes in code paths rarely taken, which may have otherwise eluded testing. It may successfully shift some of the debugging effort from run-time to compile-time. The merit of any performance gain during run-time is debatable, and certainly not a really strong argument in my opinion.

What object-oriented programmers refer to as generic programming is parametric polymorphism (as opposed to ad-hoc or subtype polymorphism, both of which Java also supports.) Parametric polymorphism is also famously captured in System F, or polymorphic (or second-order typed) lambda calculus. It introduces universal quantification over types. There's a similar system developed by Mitchell & Plotkin (1988), which introduces existential quantification instead.

Prior to 1.5, Java lacked any kind of parametric polymorphism1. The need for parametric polymorphism in a statically typed programming language should be obvious to any reader: it heightens the expressive power of the type system without compromising on type safety. Thus, generics were a step in the right direction.

An important mental note: in OOP terminology, polymorphism usually refers to subtype polymorphism. Sometimes polymorphic methods are mentioned, which correspond to ad-hoc polymorphism. When I refer to polymorphism in the rest of this text, I will mean the parametric kind, i.e. generics.
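
To keep the three kinds apart, here's a rough sketch of each in Java terms (the methods are made up purely for illustration):

class PolymorphismKinds {
    // subtype polymorphism: one method body, accepts any subtype of Number
    static void printValue(Number n) { System.out.println(n.doubleValue()); }

    // ad-hoc polymorphism: overloading, a separate body per argument type
    static String describe(int i)    { return "an int: " + i; }
    static String describe(String s) { return "a string: " + s; }

    // parametric polymorphism: one body that works uniformly for every T
    static <T> T identity(T t) { return t; }
}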

Erasure

In Java, generics are only used at compile-time. The JVM isn't even aware of the existence of generics. This is called erasure, and it exists purely for backwards compatibility reasons. It leads to several unfortunate problems with type safety.

When programming with generics in Java, I'm always amazed by the amount of explicit casting one has to perform. Ideally, you'd never want to do that. Explicit casts can't generally be checked at compile time; we give up type safety by using them.

Reification

The opposite of erasure is reification — a very fancy name deriving from the Latin word res (thing.) Naftalin and Wadler thus call it thingification. For us, it means that a type carries run-time information about itself. A Number knows it's a Number at run-time, and you can retrieve that information using the reflection API. A List<Number> only knows it's a List, but it has no idea about the fact that it's carrying Numbers.
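
A quick sketch of what that means in practice (class and variable names are mine):

import java.util.ArrayList;
import java.util.List;

class ReificationDemo {
    public static void main(String[] args) {
        Number n = Integer.valueOf(42);
        System.out.println(n.getClass());  // class java.lang.Integer -- reified

        List<Number> ns = new ArrayList<Number>();
        System.out.println(ns.getClass()); // class java.util.ArrayList -- the Number is gone
    }
}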

The proposal for Reified Generics thus would either completely throw out type erasure, or allow some sort of explicitly reified generics syntax.

There currently are only a couple of non-reifiable types in Java:

  • type variables
  • instantiations of polymorphic types (such as List<String>)
  • bounded instantiations of polymorphic types (such as List<? extends Number>)2

Generics Have Poor Support for Arrays

Arrays in Java feel like a wart nowadays, especially in the presence of generics. There is one crucial constraint on array creation which makes it utterly annoying to use them: the component type of an array must be reifiable.

See, given a type variable T, T[] is perfectly valid in Java. The Collection interface defines a method <T> T[] toArray(T[] a) after all! But there's something fishy about this method: why do I need to pass it an array? And isn't Collection defined as Collection<E>? T is only in scope for this one method, as can be gleaned from the type signature. T has nothing to do with E. The following code typechecks:

Collection<String> cs = new ArrayList<String>();
Number[] na = cs.toArray(new Number[10]);

There's an alternative method Object[] toArray(), which is more along the lines of what you'd expect such a method to do. It creates the array for you (that's the point, after all,) but it creates an array of the generic Object type, which you'd have to cast into something more appropriate yourself.

The cause of this idiosyncrasy is simple: you cannot write code that explicitly creates arrays with non-reifiable component types. Recalling our earlier list, it means that type variables and (bounded) instantiations of polymorphic types cannot be used for arrays without casting explicitly.

<T> void f() {
    T[] a = new T[1]; // error
    List<Integer>[] il = { Arrays.asList(1, 2) }; // error
}

Both of these lines will fail with generic array creation (hooray for descriptive compiler error messages.)

But wait, how do the collection classes like ArrayList do it? Don't they have to use arrays with a polymorphic component type internally? Yes, they do. And you can create generic arrays:

T[] ta = (T[]) new Object[1]; // unchecked

Since the compiler cannot ensure the above line will actually work at runtime, it issues an "unchecked" warning for this line of code (see below.) The compiler is right, too: ta now has the run-time type Object[], not whatever T[] is! So if you instantiate T to, say, String, and try to use ta as a String[], you will receive a run-time error even though you didn't explicitly cast anything!

Naftalin & Wadler thus tell you to adhere to the Principle of Truth in Advertising:

The reified type of an array must be a subtype of the erasure of its static type

That's quite a mouthful, isn't it? I recommend reading the appropriate chapter 6.5 in Naftalin & Wadler (2006) for more information, but the gist is this: the run-time type of any array must be the same as, or a subtype of, what is left of its compile-time type after erasure kicks in. If you don't adhere to this principle, you will get into trouble: you're selling something as being of type a when it is actually of a completely different type b, which can be anything. The compiler can't catch this, and it will result in an unchecked run-time exception, most likely terminating your entire program. It's your responsibility (and it shouldn't be.)

This is the reason writing T[] toArray() is not advisable; you have to pass toArray a T[] parameter that you create yourself, and that you are responsible for. The compiler can't help you.

The solution to this is one of: a) don't mix arrays and generics3, it's bad for your mental health, or b) arcane magic aka reflection. I will leave it to the reader to figure out method b)4, since I'm tired of writing this at the moment. It's a pointless exercise anyway: the reality of the matter is, b) doesn't really exist if you want to write a well-designed library. Why? Well…
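
For completeness, here's a minimal sketch of one reflective approach (my own illustration, not necessarily the method hinted at in the footnote); note that the unchecked cast still doesn't go away:

import java.lang.reflect.Array;
import java.util.Collection;

class GenericArrays {
    // Create a T[] whose run-time component type really is T by passing a
    // Class token. The compiler still can't verify the cast, hence "unchecked".
    @SuppressWarnings("unchecked")
    static <T> T[] toArray(Collection<T> c, Class<T> componentType) {
        T[] result = (T[]) Array.newInstance(componentType, c.size()); // unchecked
        int i = 0;
        for (T t : c) {
            result[i++] = t;
        }
        return result;
    }
}

The array now honours Truth in Advertising, but only because the caller supplied the component type at run-time via the Class token.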

I find the next example particularly devious (from N&W 2006, p. 87):

List<Integer>[] ils =
    (List<Integer>[]) new List[] { Arrays.asList(1) }; // unchecked
List<? extends Number>[] nls = ils;
nls[0] = Arrays.asList(1.01); // storing a Double list in an Integer list array!
int n = ils[0].get(0); // class cast exception

You just need to inadvertently pass a reference of your array with a non-reifiable type to some devious method, and it might put stuff in there that will make your program crash. And the compiler never saw it coming.5 N&W thus call for adherence to the Principle of Indecent Exposure:

Never publicly expose an array where the components do not have a reifiable type

These two principles together mean that you should avoid non-reifiable component types both in the source and at run-time, which means you should stick to rule a): don't mix arrays and generics. At all. Of course, the Java standard library doesn't have to follow these rules.

The way ArrayList et al. handle the issue internally is to never expose the generic arrays they very much do use internally, which is why you have to pass your own T[] to toArray. It also means you have to tread lightly whenever you use generic arrays internally.

Inner Classes of Polymorphic Classes May Not Be Used as Array Component Types

Now that was a long subsection title. But it really is that silly:

public class C<E> {
    N[] ns = new N[10]; // *error*: Generic Array Creation
    private class N { int data; }
}

This will not compile. If you omit the type parameter of C, it's no problem. If you put N into its own class file, it's fine again. The reason is subtle: a (non-static) inner class of a generic class is implicitly parameterised by its enclosing class's type parameters, so N here really means C<E>.N, and instantiations of polymorphic types are, as we saw above, not reifiable. Make N static, or move it out of C, and it becomes reifiable again.

No Polymorphism for Exceptions

Type erasure also leads to the awkward consequence that anything deriving from Throwable cannot be a generic type, since the JVM couldn't distinguish different instantiations of that type, but it needs to in the case of exception handling.
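
In code (the exact error message wording varies between compiler versions):

// error: a generic class may not extend java.lang.Throwable
class MyException<T> extends Exception {
    T payload;
}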

Static Fields of Generic Types

Erasure also makes it impossible for a static field of a class to have the type of one of its type parameters:

public class C<A> {
    static A a;
}

In order for this to work, the runtime would have to keep track of a value of a for each instantiation of the type parameter A, which is not possible, since it doesn't even know that A exists! It only ever knows about the raw type C.

Downcasting Towards a Polymorphic Type is Always Unsafe

And it can result in run-time failures with a ClassCastException that are far removed from the actual origin of the erroneous code. Consider the following snippet, loosely based on FAQ005:

void f() {
    List<Date> dl = new ArrayList<Date>();
    dl.add(new Date());
    List<String> sl = (List<String>) ((Object) dl); // unchecked warning
    g(sl);
}
void g(List<String> l) { String s = l.get(0); } // ClassCastException

The above code will always issue a warning, and we can see why: the compiler cannot generally ensure that the code won't lead to a run-time error. The problem is that there are scenarios where doing this is actually useful and legal: given Token, a subtype of Annotation, casting a List<Annotation> to a List<Token> (something I have had to do very often in the context of UIMA) will always result in a warning. The compiler cannot reason about whether the warning is justified or not.

Don't just ignore those unchecked warnings; doing so can end badly, and it will confuse you.
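
What you can do is confine the unavoidable cast to a single, well-documented helper instead of sprinkling it over your call sites. A sketch (the helper is mine, not UIMA API):

import java.util.List;

final class Lists {
    // Only sound if the caller guarantees that every element really is a T,
    // e.g. a List<Annotation> known to contain only Tokens.
    @SuppressWarnings("unchecked")
    static <T> List<T> unsafeDowncast(List<? super T> l) {
        return (List<T>) l; // unchecked by necessity
    }
}

That way the warning is suppressed in exactly one place, with the justification sitting right next to it.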

Theoretically…

I would recommend anybody further interested in exploring the theory behind (and above!) Java's Generics to have a look at Wadler's page on them. A lot of very interesting papers are linked there, as well as the authoritative O'Reilly book on Java Generics.


  1. Well, that's not exactly true. Technically, the array type does count as a polymorphic type.

  2. Curiously, wildcards are reified, so List<?> has full run-time type information, but List<? extends Object>, which is equivalent, does not.

  3. Did I mention that varargs are actually just arrays? Yes, don't mix generics and varargs either; same restrictions apply. Some methods in the standard library use varargs, among them Arrays.asList. And this does mean that creating a List<T> or List<List<Integer>> with said method shouldn't be attempted…

  4. Hint: it involves Arrays.asList and can't avoid unchecked casts either.

  5. Well, it did issue a warning alright, but as we know, there is no way around that in case you're dealing with arrays anyway.

Friday, 18 February 2011

Vegetarian Lasagna with Black Beans

Nothing tastes quite as good as lasagna Bolognese, one of the very few meat dishes I used to eat when I actually did eat a little meat. Since I'm thoroughly grossed out by any form of allegedly 'edible' animal now, I've been looking for a lasagna that could compare to 'the real deal.' Recently, I found a delectable recipe over at epicurious.

The lasagna requires quite a bit of effort. Due to the different cheeses in it, the cost is also relatively high. I tried to make it both as easy as possible, and as cheap as possible.

Ingredients

  • 200g black beans (usually available in any Asia shop.)
  • 250g Ricotta (Italian cottage cheese. You can substitute it with pretty much any kind of cottage cheese that doesn't have too strong a taste.)
  • 3 balls of mozzarella, around 250 to 300g; (it isn't worth it to buy good buffalo Mozzarella, but don't buy the cheapest stuff either.)
  • Parmesan (or, if you want to save money, Grana Padano is equally good.)
  • 1 egg
  • 1 onion, some 3 cloves of garlic
  • 150g of black olives
  • olive oil
  • some 2-2½ cans of tomatoes in various aggregate states: puréed, skinned and diced, or just cut up.
  • 2-3 tsp. ground dried cilantro
  • basil
  • a bay leaf
  • lasagna noodles
  • jalapeños to taste
  • oregano, thyme, ground dried basil
  • salt, pepper, asafoetida, chili powder, sweet pepper powder, dried pomegranate powder
Method

Preheat the oven to 180°C and oil a lasagna dish.

Bean cream: whisk the ricotta until creamy with the egg, 25g of grated Parmesan, the cilantro powder and 2 tbsp. of olive oil. Cook the black beans and set one cup of them apart; mix that cup into the ricotta and blend until creamy.

Tomato sauce: heat 2 spoons of olive oil in a frying pan. Dice the onion and press the garlic; fry until the onions are soft. Remove stems and seeds from the jalapeños, dice them, mix them in and fry for another 2-3 minutes. Season with salt, pepper, the bay leaf, the spices and chili powder to taste. Pit the olives if necessary, dice and purée the tomatoes, and add both. Fold in the rest of the beans and cook until the oil separates.

Assembly: grate the Parmesan and divide the mozzarella, the tomato sauce and the béchamel into thirds each. Layer in the dish, from top to bottom:

  • Parmesan
  • 1/3 béchamel
  • noodles
  • Parmesan
  • 1/3 mozzarella
  • 1/3 tomato sauce
  • 1/3 béchamel
  • noodles
  • Parmesan
  • 1/3 mozzarella
  • 1/3 tomato sauce
  • bean cream
  • noodles
  • Parmesan
  • 1/3 mozzarella
  • 1/3 tomato sauce
  • 1/3 béchamel
  • noodles

Bake for 30-35 minutes.

Thursday, 30 December 2010

The Perfect Prompt

Long ago I switched to zsh from bash for my hacking needs. I think at the time I just needed a new toy. However, over the years, zsh has proven to be a very good shell, with excellent flexibility, a healthy "Do what I Mean" attitude, and overall much more powerful globbing, auto-complete, keybinding and scripting capabilities than bash.

Part of the usual appeal of zsh is the ridiculously fancy prompts you can make it display. I was never a fan of those. My prompts have to be simple, since I'm staring at them all the time. Don't show me anything I don't need to know! I find it hard to concentrate on something anyway, without my shell telling me how many emails I have left in my inbox. So I created the perfect minimalist prompt that does exactly what I need it to do:
  • Display whether I'm root or not
  • How many jobs are there running in the background (if any?)
  • Did the last command finish successfully, or did the damn thing just die silently?
  • Since I'm using vi keybindings, am I in insert mode or in normal mode?
  • What machine is this shell on?
  • What directory is this shell in?
Almost all of these have defaults I don't expect to be mentioned. Usually, I have nothing running in the background, I'm on my local machine, and the last command finished successfully. At that point, I don't need to be reminded that all is well. Hide it! So here's a picture. Let's go over the features one by one:

I don't like directories to be on the left-hand side prompt, since that'll move your prompt all the way to the right when you're in a deeply nested directory. So they're on the right, where they don't bother me, but I can still look them up easily (the blue string on the right will disappear if the cursor gets too close.)

Named directories are zsh's handy way to provide directory shortnames. Instead of going to the lengthy ~/Documents/src I can just go to ~src. I can define similar hashes for projects I'm working on, say ~myproject for ~/Documents/src/java/myproject.
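
Defining such a hash is a one-liner, using the example paths from above:

hash -d src=~/Documents/src
hash -d myproject=~/Documents/src/java/myproject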

Whenever there's a background job (like sleep in this example) I get a yellow number before my prompt mark. This is a counter for the background jobs. It disappears if there aren't any.

The shell checks the variable SSH_CLIENT to see if it's on a remote or local machine. On a local machine, nothing happens, but on a remote machine (i.e. a machine I'm accessing via SSH — the client always sets SSH_CLIENT when connecting) it displays the leftmost part of the machine's subdomain. I don't need to know I'm on a local machine when I'm sitting in front of it! But I do want to know if a shell doesn't belong to the machine I'm currently working on.

Similarly to the background job display, I don't want to see every program's return code all the time. I'm interested in the ones that failed. If a program returns non-zero, I see a big fat red number signalling an error.

Finally, I like to know which mode I'm in. The blue > indicates insert mode, and a yellow one indicates normal mode. Like this:
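
In zsh's prompt escapes, the pieces described above boil down to something like the following sketch (not my exact config; see the .zshrc linked below):

# %(1j.X.) shows X only if there is at least one background job,
# %(?..X) shows X only if the last command exited non-zero,
# %(!.#.>) shows # for root and > otherwise.
PROMPT='%(1j.%F{yellow}%j%f .)%(?..%F{red}%?%f )%(!.#.%F{blue}>%f) '
RPROMPT='%F{blue}%~%f'  # the directory on the right, out of the way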

Since it's tracked by version control, I'll link you to my .zshrc. My shell config changes periodically, and no doubt even this "perfect prompt" will experience revisions. Use the commit log to find commits to .zshrc that look like this one.

Finally, since my config's rather big, maybe someone will find something else useful there :-)

Sunday, 12 September 2010

The Catsters YouTube Channel: Lectures on Category Theory

Just quickly posting to mention my finding a neat little YouTube channel with lectures on Category Theory: The Catsters.

The folks on the channel giving the lectures are enthusiastic nerds (what else?) and their lectures are actually really well done. One can even read what they scribble on the board. Usually.

The quality of the videos is pretty poor though, and the buzzing of the sound and poor balance of voice and ambient noises, as well as the amplification problems (sometimes, especially on the Monads video, the voice just blows out the mic amp's range) make it a little hard to follow the lectures. It won't, of course, prevent a real enthusiast from watching them. The explanations' quality is good (as far as I could tell.)

I increasingly find the lecture format of Internet videos a little nicer than your average run-of-the-mill lecture, where you sit with some 100 folks in one room. Granted, my own course of studies usually put me into rooms of no more than 20 people, but I still find it quite useful to be able to view lectures on demand. There's something to be said for both formats.

Go check it out! And I'll try to find some original content to post again :-P

Tuesday, 6 July 2010

Generalised Algebraic Data Types in Haskell

I will just ignore the fact that I haven't posted anything here in more than 1.5 years. It's still my blog :-) Maybe there's gonna be a follow-up on how/what I was doing, but right now, I'll skip right over it.

Heinrich Apfelmus has posted an absolutely enlightening video explanation of Haskell's generalised algebraic data type system on his blog. It gives a newcomer quite a nice perspective on why GADTs are so powerful, and how one can use them. There's a couple of caveats: first, if you're totally new to Haskell this won't make too much sense to you. You'll have to understand at least data type declarations, type constructors and maybe the infix function notation (and the fact that every type constructor is a function!)

The example he chose (algebraic expressions) is quite nice & simple, but it's also a toy example. It would be nice to see GADTs used in an example that is still intuitive but maybe a little more... engaging. I'm thinking of two-sorted typed logic, and/or typed lambda calculus with embedded FOPL. I'm currently implementing something along those lines and hope to release the code soon :-)
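
For the impatient, here's a minimal sketch in the spirit of that expression example (my own rendition, not the code from the video):

{-# LANGUAGE GADTs #-}

-- Each constructor fixes the result type, so the type checker
-- rejects ill-formed expressions like Add (B True) (I 1).
data Expr a where
  I   :: Int  -> Expr Int
  B   :: Bool -> Expr Bool
  Add :: Expr Int -> Expr Int -> Expr Int
  If  :: Expr Bool -> Expr a -> Expr a -> Expr a

eval :: Expr a -> a
eval (I n)      = n
eval (B b)      = b
eval (Add x y)  = eval x + eval y
eval (If c t e) = if eval c then eval t else eval e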

EDIT: I just discovered a newer post of his about the fixed point combinator, and it even taught me some new and interesting things! I really like the style of his videos... and his cute accent ^^.

Thursday, 4 December 2008

The Good the Bad and the Funny

Credit where credit is due. Vladimir Putin may be a despicable person, but, damn, he's so Russian, he sweats Vodka:

French media had quoted Putin as saying in a heated conversation with French President Nicolas Sarkozy in Moscow on August 12 that Saakashvili should be "hung by his balls" for starting the war which was roundly condemned by the West.
Source: Reuters

The Russian soul is an intrepid one. They like their unabashed, dauntlessly honest attitude; they sell it as a feature, where some might consider it outright rude. Can't say I blame them for that. Hearing one of the most "important" persons in the world talk like that does find favor with me. That's probably my Slavic side, or maybe even the faint Mongolic or Thracian genes, so I don't expect you westerners to understand that. It's great, because it's stupid, but it's manly, which is even more stupid, and that in turn makes it great again.

He goes on:

"Seriously speaking, both me and you know about tragic events in another region of the world, in Iraq, invaded by American troops due to a concocted pretext of searching for weapons of mass destruction," said Putin.

"They found no weapons, but hanged the head of state, albeit on other charges ... " said Putin, referring to the 2006 execution of former Iraqi President Saddam Hussein.

"I believe it is up to Georgia's people to decide what kind of responsibility must be borne by those politicians who led to these harshest and tragic consequences," he said.
ibid

No matter what you may think about Mr. Putin, it does take quite some self-confidence to accuse the Americans so openly of wrongdoing. Demanding to hang the Georgian president by his balls for attacking another country under "false" premises is indirectly asking to do the same thing to George. Bush. Jr. Not that I would mind.

And I'd like to take that as an opportunity to remind everybody that, despite these brutish words, Putin is a man of extreme subtlety, and no doubt even those words were duly prepared and carefully placed (…or maybe I overestimate him?) I still think he's one of the most dangerous people on this planet (known to me, anyway) and I'm watching the current political "developments" in Russia with great unease.