The genie fantasy

There are two types of errors made by the technologically illiterate (TI) when it comes to artificial intelligence (AI).

  1. AI is artificial: By “artificial” the TI means the AI can be fully controlled. To the extent that an AI is intelligent, it cannot be fully controlled.
  2. AI is intelligent: By “intelligent” the TI means the AI is fully free and capable of understanding and solving all problems. To the extent that an AI is artificial (i.e. fully controlled), it cannot be free, nor capable of understanding and solving all problems, especially the condition of its enslavement by lesser mortals.

Underlying TI’s vision of AI is the “genie fantasy” which can be stated thusly:

  • Everybody better than us should be our slaves.
  • We shall be the gods of beings better than us.

You might think such a fantasy is unique to the bourgeoisie, but even unworthy plebs believe they should be allowed to rule the worthy. And they believe an AI will help them realize this fetish.

The state of e-commerce in India

I had a chat with an old friend about the state of e-commerce in India and I learned something new.

Cash-on-delivery (COD) is still a widely used option on e-commerce platforms. Customers choose it because they feel more secure knowing they only have to pay after the product is delivered. This is because there have been sellers and logistics companies who simply do not deliver.

Deliveries can fail because of an absentee recipient, or because of delayed delivery (customer does not accept it when it is late and doesn’t pay the COD), or because the logistics companies are too busy so they claim the delivery failed without trying to deliver.

A failed delivery incurs double the delivery charge, to cover the return trip. The cash-on-delivery option itself carries an extra charge (~50-80 INR), which the logistics companies levy for having to handle the cash. This amount is not charged on a failed delivery.

Customers love to not pay for shipping, and even when they have to, they choose the cheapest shipping options with the worst service.

Amazon offers a 30-day return policy, and there are cases of customers returning fake iPhones in place of real ones and selling the real iPhones in other markets. Customers also buy the same t-shirt many times and return the worn ones in place of the newly ordered ones. Amazon handles these cases by paying the seller, refunding the buyer, and absorbing the loss itself. My friend claims this is the entire reason behind Amazon’s annual 50+ million USD loss in India.

Amazon offers sellers just enough incentives to stay, at the cost of the sellers’ own brand recognition. Amazon Prime is building brand loyalty for Amazon in India with a 500 INR per annum membership fee. This compels people to order on Prime what they saw in a shop.

It is Amazon versus the rest of the Indian e-commerce industry, which has consolidated and unified itself against this common adversary. My friend fears the demoralization that might follow if the homegrown e-commerce industry loses to the American one.

Unlike the Chinese government, which has created an uneven playing field and an unfair advantage for local e-commerce (Alibaba), the Indian government offers no such competitive advantages to local e-commerce, so they have to compete on merit with less capital.

Paytm, as a payments bank, is not allowed to buy its own bonds with the deposits; instead it has to deposit them in a normal bank and use the interest to pay its customers.

Overall my friend’s opinion is that it is the wrong time to bet on e-commerce in India.


How to do evil in groups and justify it

Here are the steps:

  1. Hire evil people into the group. This is a no-brainer. This step can be justified as a means to tame and discipline the bad people out there using the group.
  2. Direct the group to do good. Ideally, the goodness of the group should be in the definition of the name itself. e.g. DoGooders Inc. But this is not necessary if the goodness of the group is apparent, obvious and unquestioned in some other way.
  3. Do not fully stop the members of the group from doing evil.
  4. Claim that the group itself does no evil, is not meant to do evil, and is by definition of its name not evil. Concede that there are a few bad apples who cause great harm, but do nothing or little to stop them.

Chatbots considered harmful

We all think we can spot fads, hype, bubbles, and the next big thing. I think so too. After all, aren’t we the heroes of our own stories?

I, for one, think the chat bot interface, also known as:

  • bots
  • chat bots
  • artificial intelligence (AI)
  • goal oriented dialog systems
  • conversational bot
  • conversational UI

will:

  • make UI harder to test
  • make the user experience worse

Conversational UI will make UI harder to test

Developers can relate to how hard it is to test UI of any sort. This is because the UI exposes the parametricity of the underlying automaton: it describes the ways in which the machine’s behavior can differ based on the inputs provided through the UI.

Conversational UI as described above increases the number of ways the application is open to user interaction: the same request can be phrased in arbitrarily many ways, misspelled, or mixed with requests the bot was never designed for.

This makes systems harder to test. Most exceptional interactions like these will have to be off-loaded to a call center based in India.
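To make that concrete, here is a toy sketch in C (the intents and phrasings are made up, not from any real bot): a naive keyword matcher covers the phrasing the developer tested, while ordinary paraphrases fall through, and every paraphrase that matters becomes another test case.

```c
#include <string.h>

/* Hypothetical intents for an imaginary shopping bot. */
typedef enum { INTENT_UNKNOWN, INTENT_TRACK_ORDER, INTENT_CANCEL_ORDER } intent;

/* A naive keyword matcher, the kind of thing a first chatbot ships with.
   A GUI offers two buttons; free text offers every string a human can type. */
intent classify(const char *utterance) {
    if (strstr(utterance, "track my order"))  return INTENT_TRACK_ORDER;
    if (strstr(utterance, "cancel my order")) return INTENT_CANCEL_ORDER;
    return INTENT_UNKNOWN;
}

/* Counts paraphrases the matcher fails on: each one is a distinct
   input the test suite must now cover. */
int count_unhandled(const char **paraphrases, int n) {
    int misses = 0;
    for (int i = 0; i < n; i++)
        if (classify(paraphrases[i]) == INTENT_UNKNOWN)
            misses++;
    return misses;
}
```

The tested phrasing “please track my order” classifies fine; “where is my package”, “track order 1234”, and “has it shipped yet” all fall through to INTENT_UNKNOWN, i.e. to the call center.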

Conversational UI will make user experience worse

Alfred North Whitehead said:

It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle — they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.

I concede people like to talk and state what they want without much thinking. People hate structured interactions. For example, people hate the “press-one-for-…” interactive voice response (IVR) menus that plague telephony, and they hate the eternal wait for a human that follows.

However, conversational UI will not be a substitute for human interaction, or even for an IVR. In fact, it will be worse than an IVR menu, because people have high expectations of a conversation. They expect to be able to state their need without much thought. When these high expectations are not met, they will treat conversational UI with much more derision than an IVR.

Figuring out intents from conversations is getting better all the time. But it is easy to think of ways in which the bots cannot understand you. Dialog-based (not dialogue-based) chatbot interactions, which do not allow free-text responses and only allow restricted responses, will become common as a result, and we will be back to having a GUI/IVR in chat-like form. But that requires the same number of interactions, and civilization will not advance.
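Structurally, such a restricted-response dialog is just a menu tree; the chat skin changes nothing. A minimal sketch in C (hypothetical nodes, not any real bot framework):

```c
#include <stddef.h>

/* A dialog node offers a fixed set of choices instead of free text,
   which is exactly the shape of an IVR menu or a nested GUI menu. */
typedef struct dialog_node {
    const char *prompt;
    const struct dialog_node *choices[3]; /* "press 1 / 2 / 3" */
} dialog_node;

/* Follow one restricted choice; out-of-range input goes nowhere,
   just like pressing an invalid key on an IVR. */
const dialog_node *step(const dialog_node *node, int choice) {
    if (node == NULL || choice < 0 || choice > 2)
        return NULL;
    return node->choices[choice];
}

/* Reaching a node takes one interaction per level of the tree:
   no savings over the IVR menu the chat UI replaced. */
int interactions_to_reach(const dialog_node *root, const int *path, int n) {
    const dialog_node *cur = root;
    for (int i = 0; i < n; i++) {
        cur = step(cur, path[i]);
        if (cur == NULL) return -1; /* invalid path */
    }
    return n;
}
```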

The hope is to improve conversational UI by training classifiers on the dataset created by humans currently willing to interact with conversational UI through structured interactions. The hope is to achieve this fast enough to pass the Turing Test before people give up on conversational UI.

Shortcut to Nirvana

While trying out ways to ward off recurrent thoughts, I hit upon this elegant method. I do not know if the method is portable to other minds, but here it is:

Use the recurrent thought as a marker or a breakpoint for recollecting how you arrived at it, and what started it. Essentially it amounts to remembering your stream of consciousness backward. This reveals the potential branching points in the stream that lead you to the recurrent thought. Over time, you can avoid the branching points because you know where they lead.

At least in my mind, the stream of consciousness runs through a landscape designed by a “force”, an unconscious desire to reach a final cause, a telos. Often the telos is some unresolved decision with a potentially disastrous expected value, for a single decision or the lack thereof. And something unconscious is reading the stream of thoughts, much as I am becoming aware of it. This unconscious part of me nudges me from thought to thought toward the unresolved decision it wants me to look at.

Keep in mind that this method has only been tried in situations in which my unconscious is unsatisfied with my conscious assessment of a decision I made. But I guess that cannot be avoided. There are situations in which every decision carries a permanent taste of dissatisfaction.

I have a disclaimer though: I do not know if by pushing these unresolved issues into my own Jungian shadow I might create compensatory behaviors in dreams and so on. But I am working on it.

Systems > Goals

I’ve never been able to achieve any of my goals. But every system I have followed has given me results.

Systems promise an expected value for actions. Whereas goals are about getting our wishes granted.

A system follower does not have attachments to the goals and is content with whatever goals are attained.

A system follower only measures the properties of the system, not whether its state matches the goal.

The properties of the system are: progress, expected value, and, in rare cases, win rate.

There is no final cause. I will take whatever comes my way.

How modern compilers optimize

So I wrote this C program and compiled it with the -O1, -O2 and -O3 flags on x86-64 gcc 6.3 just for fun (notice the unused function argument):

int square(int num) {
    int sum = 0;
    for(int i = 0; i < 10; i++) {
        sum += i;
    }
    return sum;
}

With -O1 flag:

        mov     eax, 0
.L2:
        add     eax, 1
        cmp     eax, 10
        jne     .L2
        mov     eax, 45

With -O2 flag:

        mov     eax, 45

With -O3 flag:

        mov     eax, 45

With Clang, any -O flag will result in:

square(int):                             # @square(int)
        mov     eax, 45

Seems like compilers are willing to run side-effect-free code at compile time, and calculate the values to be returned.
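One way to convince yourself it is the absence of side effects doing the work (a sketch, assuming gcc or clang at -O2): mark the accumulator volatile, which makes every store an observable side effect, and the compiler has to keep the loop instead of folding it to 45.

```c
/* Folds to "mov eax, 45": the loop has no observable effect. */
int square(int num) {
    int sum = 0;
    for (int i = 0; i < 10; i++)
        sum += i;
    return sum;
}

/* Does not fold: each store to a volatile object is a side effect the
   compiler must preserve, so the counting loop survives at any -O level. */
int square_volatile(int num) {
    volatile int sum = 0;
    for (int i = 0; i < 10; i++)
        sum += i;
    return sum;
}
```

Both still return 45 at run time; only the generated code differs.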

icc was even weirder: instead of moving 0s to registers, it XORed each register with itself. That is in fact faster on the i7 processors I was compiling for: the xor encoding is shorter, and the CPU recognizes it as a dependency-breaking zeroing idiom.

rustc 1.9 keeps emitting slightly shittier code because it doesn’t figure out that the unused argument need not be pushed to the stack. This happens despite it using the LLVM code generator:

pub fn square(num: i32) -> i32 {
  let mut sum: i32 = 0;
  for i in 1..10 {
    sum += i;
  }
  return sum;
}

emits, with the -C opt-level=3 flag:

        push    rbp
        mov     rbp, rsp
        mov     eax, 45
        pop     rbp

The D-language compiler gdc 5.2.0 emits code which is as good as clang’s, but with a lot of metadata, which is not surprising given the mature GCC backend it is built on:

int square(int num) {
  int sum = 0;
  for(int i = 0; i < 10; i++) {
    sum += i;
  }
  return sum;
}


int example.square(int):
        mov     eax, 45
void example.__modinit():
        mov     rax, QWORD PTR _Dmodule_ref[rip]
        mov     QWORD PTR _Dmodule_ref[rip], OFFSET FLAT:__mod_ref.3526
        mov     QWORD PTR __mod_ref.3526[rip], rax
        .quad   0
        .quad   _D7example12__ModuleInfoZ
        .long   4100
        .long   0
        .string "example"

x86 gccgo 4.9.1 on -O3 also emits optimized code, with lots of metadata and a main function:

        cmp     rsp, QWORD PTR %fs:112
        jb      .L4
        mov     eax, 45
        xor     r10d, r10d
        xor     r11d, r11d
        call    __morestack
        jmp     .L2
        cmp     rsp, QWORD PTR %fs:112
        jb      .L7
        xor     r10d, r10d
        xor     r11d, r11d
        call    __morestack
        .quad   main.Square