February 8, 2025

Meat is sexier: is that why we love it?

This is different. Out of scope, yes, but I need to publish this. It seems somewhat novel—or rather, I could not immediately find this thought when searching. Please let me know if you find a reference to similar ideas.

The background is mismatch -- there is a mismatch between the environment we are adapted to and the one we live in. Our bodies are adapted to the Stone Age (say, 100,000 years ago), when we were hunter-gatherers. We have not even adapted to farming (notable exception: some of us can digest milk). We are certainly not adapted to our modern society. For example, we love fast calories (sugar, processed carbs). In the Stone Age, this was advantageous for our survival -- food was scarce, and we needed every calorie we could get. That is why we love it; we just cannot resist it. We are designed for that. Similarly, we love to relax. We are lazy by nature; we conserve our energy in front of Netflix. Good for us in the Stone Age, bad for us today.

Health problems associated with the Western lifestyle can typically be explained by this mismatch. But what about our love for meat? Why do we prefer meat over nuts and fatty fish? They all have plenty of calories. Why do we gather around a fire to feast on grilled meat? Why don't we gather to feast on nuts? If nuts and fish are better for us, why haven't we developed a taste that prefers them? Shouldn't the survival rate of those who preferred nuts and fish have been higher than that of those who preferred meat?

Perhaps it is (to some extent) because meat is sexier. Men hunted in groups. The best hunters had good reproductive success with women. Women, think about it: who is sexiest? The best hunter in a group of men -- the man who brings meat to the dinner feast? Or the man who gathers nuts? We are hardwired to support the survival of our genes. Even though nuts and fish may be better for our cholesterol levels, hunting and meat could have meant more sex and more offspring. In evolution, that is what counts.

I am not sure whether this is a good explanation, but at least it motivates me to lower my cholesterol by eating more plants and less saturated fat. This is not sexy. But it is healthy.


January 9, 2021

Binson is implemented in 9 languages

I just updated binson.org and noticed that I now know of 12 implementations of Binson. In 9 languages: Java, C, JavaScript, Go, Swift, Python, PHP, Erlang, and Rust.

That's a bunch. I believe there is no implementation in C#; otherwise, all major languages seem to be included in the list. I am not counting Kotlin, since binson-java can be used from Kotlin.

For those who don't know, Binson is an exceptionally simple binary serialization format. It's like JSON, but binary -- and actually significantly simpler. Full implementations of a Binson serializer/deserializer typically range between 100 and 1000 lines of code. The complete spec is just two pages long. I suppose that is why there are so many implementations. In general: KISS -- keep it short and simple! The most important design principle.


January 1, 2021

The Cycle Gap

[Figure: processor speed versus human brain speed over the last 40 years]
The figure above shows the progress in the speed of computer processors and of human brains over the last 40 years. It is reasonable to say that computers have become 10,000 times faster during this period -- roughly a doubling every three years, since 2^13 ≈ 8,000 -- while human brains have remained the same.

We can say that a processor cycle is 10,000 times cheaper than in 1980, while a "brain cycle" costs the same. Because of this, most software code today is not optimized to save processor cycles, but rather to save the brain cycles of the developers. The development cost of software is typically much higher than the cost of running it (energy, hardware). So, in general, we no longer optimize code for computers, but for its human authors.

With the increasing cycle gap, programming languages have evolved to save brain cycles at the expense of computer cycles. However, when it comes to embedded programming for resource-constrained devices, the C language is still king of the hill. It was developed in the 1970s! Very successful, but hardly the most productive language to program in. Perhaps now it is finally time for a change. I have been reading up on Rust and how it works. It certainly sounds good. It is popular. And I really like the memory model.

Will it have a chance to compete with C for embedded? Well, well. Rust is in many ways technically superior. However, C has such robust support, so many compatible tools, and a vast amount of available code that can be reused. C is clearly defined (it has a spec), has multiple implementations, and is very stable. A clear advantage of C is that APIs and example code for embedded processors are typically written in C, not Rust or anything else.

If vendors of embedded processors shipped example code and APIs in Rust instead of C, then Rust would win! However, as things stand now, I don't dare to predict the future. Being technically superior is not enough. Still, I would love to see a more modern language really compete with C, also in the embedded world.

March 11, 2020

Powerpinions Kill Decision Making

Let me coin a word: powerpinion. A powerpinion is an opinion held by a powerful person on a subject they lack the necessary knowledge of. Often, a powerpinion is delivered through a PowerPoint presentation.

Powerpinions can completely kill the decision-making skills of an organization. It can go so far that a technical decision is taken that is obviously wrong to just about any expert on the subject.

So, how to avoid this? Perhaps the following:
  • If you are a person with power (formal or informal), please always listen very carefully to those with more knowledge than you on the subject matter.
  • Read! Study! Get the knowledge you need before having opinions on something.
  • Challenge powerpinions! Ask about the facts. What is that opinion based on? Do you have a reference to that fact? Did we ask the experts? Who is the expert on this in our organization? Can external experts be used?
  • Written communication may help. Powerpinions are often delivered orally, accompanied by hard-to-interpret PowerPoint slides that are soon forgotten. If opinions are recorded and can be referenced in the future, people will likely be more hesitant to have opinions on subjects they lack knowledge in.
  • Be careful with topics that seem simple at first, but are not. Cryptography is one example. Those who know the most realize how little they really know, but those who know a little think they know it all.
Anything else? Comments are appreciated. There is probably relevant research in the area that would be interesting to consume.

Added 2023-03-06. Link to HIPPO effect text: t2informatik.de/en/smartpedia/hippo-effect/.

June 13, 2019

IoT: WAP all over again?

In 1999, the Wireless Application Protocol (WAP) was introduced. It gained an extreme amount of interest and hype in the following years. We know what happened. It failed dramatically.

WAP is a set of protocols focused on the delivery of media to mobile phones. Essentially, it is a whole new stack of protocols beside the already established HTTP/(TLS/)TCP/IP stack for content transfer and HTML to express the content. The protocols were motivated by the need for protocols better suited to less capable devices. It was believed that HTML+HTTP was too heavy for mobile phones.

And it was, sort of. Mobiles had bad Internet connectivity and small monochrome screens with low resolution. However, as you know, this changed rather quickly. Now our mobile phones have megabits of bandwidth to the Internet, high-resolution color displays, and very capable multi-core processors. It did not take long before it was realized that, yes, a mobile can handle the ordinary Internet protocols for content distribution: HTTP+HTML, email, and so on.

I wonder:

    Is the IoT world in a WAP-phase today?

It is suggested today that the ordinary TCP/IP stack cannot be used for IoT devices because of their limitations in processing power, connectivity, and energy consumption. Instead, special IoT protocols are proposed; in particular, CoAP/UDP is often suggested.

I don't know, but perhaps we are in a temporary phase (5-10 years) where some 8-bit IoT devices cannot speak the same language as the rest of the Internet. But will that situation last? The vast majority of the protocols that we associate with the "Internet" use TCP/IP, and nowadays TLS is mostly layered on top to encrypt the communication. This TCP/IP (mostly TLS/TCP/IP) stack is used for the World Wide Web (HTTP), for SSH to control computers remotely, for REST to present cloud APIs, for email (IMAP, SMTP), and for just about everything else on the Internet. Even Netflix uses TLS/TCP/IP to stream vast amounts of video data to its customers.

So, if we want our physical IoT devices to interact directly with the existing Internet, they should speak the same language. They should speak TLS/TCP/IP. Reusing only IP with, for example, CoAP/UDP/IP is typical for IoT devices today. However, I believe it may be only a matter of time before IoT devices also speak TLS/TCP/IP and thus can interact directly with existing Internet services without translation and with end-to-end security.
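As a toy illustration of what "speaking the same language" means, here is a minimal Java sketch of a client that opens an ordinary TLS/TCP connection and sends a plain HTTP request over it, with no protocol translation in between. The host name is a placeholder, and a real constrained device would use an embedded TLS library rather than Java, but the protocol stack is the same.

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Minimal sketch: a client that speaks TLS/TCP/IP can talk directly to an
// ordinary HTTPS service. "device.example.com" is a placeholder host.
public class TlsClientSketch {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("device.example.com", 443)) {
            socket.startHandshake();  // TLS handshake over an ordinary TCP connection
            OutputStream out = socket.getOutputStream();
            out.write(("GET / HTTP/1.1\r\nHost: device.example.com\r\n"
                    + "Connection: close\r\n\r\n").getBytes(StandardCharsets.US_ASCII));
            out.flush();
            // ... read the HTTP response from socket.getInputStream() ...
        }
    }
}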

Notably, the IoT services AWS IoT Core and Google IoT Core mandate the TLS/TCP/IP stack. So, to use those services end-to-end, a device must speak that stack. I wonder about Amazon's and Google's choice not to support CoAP/UDP/IP, for example. Instead of supporting such protocols in their cloud services, they advocate using a local bridge that translates, for example, CoAP/UDP/IP traffic to TLS/TCP/IP.

We will see; by 2030 we will know the answer. No one knows it now. In the Internet world, the experts are often wrong, me included. However, I think we should consider whether we can reuse the existing, well-established "big-Internet" protocols also for our smallest devices. Yes, there is some overhead, but it might be worth it. When we send IoT communication over the public Internet, we have at least the same need for security. Also, there are efforts to make the existing TLS/TCP/IP stack more efficient, including TLS 1.3, TCP Fast Open, efficient TLS implementations, and 6LoWPAN. Perhaps we should focus our energy on those instead of building up a separate, incompatible stack of protocols for IoT devices, just like we did for mobile phones with WAP.

February 22, 2017

Delay attacks - the forgotten attack?

The unlock scenario

Together with colleagues at ASSA ABLOY, I am working on a secure channel protocol called Salt Channel. It is open source and can be found on GitHub. One potential application is to control a lock (lock/unlock commands) from a credential in the proximity of the lock (RFID card, mobile phone, fob).

Typical secure channel implementations, like TLS, provide confidentiality and mutual authentication. Data integrity is also provided. They generally protect against attacks such as replay attacks, various types of man-in-the-middle attacks, and more. However, I know of no secure channel protocol that protects against delay attacks.

A delay attack is an attack where the attacker simply delays a packet in the communication. This is definitely within the scope of what an attacker is allowed to do in just about any threat model. Also, in practice, it can be easy to perform. Delaying a packet may not seem like a threat at first. Certainly, it did not occur to us while developing Salt Channel v1 that a packet delay could be a security issue. Well, it can be!

The figure above shows the scenario. Alice wants to unlock Lock with her phone over a radio communication channel (Bluetooth, Bluetooth Low Energy, NFC) to Lock. Mallory intercepts the communication and functions as a man-in-the-middle. Alice establishes a secure channel with Lock. This succeeds, since Mallory simply forwards all messages between Alice and Lock. Then Alice sends the unlock message to Lock. Mallory recognizes this packet by packet size, packet ordering, and so on (based on studying previous communication sessions). Mallory cannot read the contents of the packet, nor modify it, but she can delay it. Alice finds that she cannot open the door. Something does not seem to work. She walks away. Once Alice is gone, Mallory, who is hiding near the door, sends the delayed unlock packet. The Lock unlocks, and Mallory can get in through the door without being detected.

Literature

I have not found literature focused on this issue. Perhaps I am just googling wrong; I have not studied this much. Any help is appreciated. I don't even know a name for this, so I invented "delay attack" since I could not find an existing term. Surely, this must be treated in the public literature already.

Note that this is not a replay attack. The packet is not replayed, and a delay attack requires completely different countermeasures.

Note also that this is not a timing attack. Even though timing is involved, a timing attack is a completely different thing, and that term should therefore not be used for this type of attack.

Existing delay attacks

There seem to be practical delay attacks against car key systems.

Protection

Many application layer implementations likely do not consider packet delay attacks and their implications. It can be argued that there should be protection against delay attacks in the secure channel layer.

When Alice sends the unlock command, she implicitly wants the door to open now, not in 60 seconds. We can see this as integrity protection of her intent to unlock now. Of course, this could be handled by the application layer, as sketched below. But why not put it in the secure channel layer? It sure seems like a general problem that should be dealt with in a general way.
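To make the application-layer option concrete, here is a minimal sketch (not part of Salt Channel) of a freshness check: the unlock command carries the time it was sent, and the lock refuses to act on commands that arrive too late. The names and the two-second window are assumptions, and clock synchronization between phone and lock is glossed over.

// Hypothetical application-layer freshness check for an unlock command.
// Assumes the command's timestamp is covered by the secure channel's
// integrity protection and that the clocks are roughly synchronized.
public class UnlockFreshnessCheck {
    private static final long MAX_AGE_MILLIS = 2_000;  // assumed freshness window

    /** Returns true if the unlock command is fresh enough to act on. */
    public static boolean isFresh(long sentAtMillis, long receivedAtMillis) {
        long age = receivedAtMillis - sentAtMillis;
        return age >= 0 && age <= MAX_AGE_MILLIS;
    }

    public static void main(String[] args) {
        long sentAt = System.currentTimeMillis();
        long delayedArrival = sentAt + 60_000;               // held back by Mallory for 60 seconds
        System.out.println(isFresh(sentAt, sentAt + 50));    // true: acted on immediately
        System.out.println(isFresh(sentAt, delayedArrival)); // false: rejected
    }
}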

Perhaps another blog post will deal with countermeasures against delay attacks. We need some.

Appended

2020-02-24. These links are of interest. They use the term "delay attack".

https://tools.ietf.org/html/draft-mattsson-core-coap-actuators-06#section-2.2

https://tools.ietf.org/html/draft-liu-core-coap-delay-attacks-01


March 11, 2016

TestableThread - a Simple Way to Test Multi-Threaded Code

Multi-threaded code is hard to write and hard to test. For years, I have been missing simple tools for testing multi-threaded Java applications. Anyway, for certain types of test situations, the TestableThread class can be used. See below, and see the TestableThread class in the java-cut repo.

The idea is simple. By introducing named "breakpoints" in code run by threads, the test management code can control the execution of a thread by telling it to go to a certain breakpoint. Breakpoints are added to normal code using statements like:

assert TestableThread.breakpoint("breakA");

By using an assert, the breakpoint code will not incur any performance penalty when the software is run in production (the default way to run Java software is without -ea, that is, without enabling assertions).

Given defined breakpoints, test code can let threads execute until they reach a given named breakpoint. For example, the code:

t1.goTo("breakA");

will enable the t1 thread to run until it reaches breakpoint "breakA". When the breakpoint is reached, the t1 thread will stop executing until it gets another goTo() request.
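As a usage example (a sketch of mine, not taken from the java-cut repo), a test can step a worker thread through its breakpoints one at a time. Note that assertions must be enabled (-ea) for the breakpoints to be active, and that goTo() returns immediately, so crude sleeps are used here to let the thread reach each breakpoint.

// Minimal usage sketch: stepping a TestableThread through two breakpoints.
// Run with assertions enabled (java -ea ...), otherwise the breakpoints are no-ops.
public class TestableThreadExample {
    public static void main(String[] args) throws InterruptedException {
        StringBuffer log = new StringBuffer();  // StringBuffer: safe to read from the main thread

        TestableThread t1 = new TestableThread(() -> {
            log.append("step1;");
            assert TestableThread.breakpoint("afterStep1");
            log.append("step2;");
            assert TestableThread.breakpoint("afterStep2");
            log.append("step3;");
        });

        t1.goTo("afterStep1");    // starts t1 and lets it run to "afterStep1"
        Thread.sleep(100);        // crude wait; goTo() does not block
        System.out.println(log);  // expected: step1;

        t1.goTo("afterStep2");    // t1 continues to "afterStep2"
        Thread.sleep(100);
        System.out.println(log);  // expected: step1;step2;

        t1.go();                  // run to completion, ignoring breakpoints
        t1.join();
        System.out.println(log);  // expected: step1;step2;step3;
    }
}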


The implementation below is only 50 lines of code or so, but it has still proven useful. There are, of course, lots of improvements and additions that could be made, including conditional breakpoints.

Code:

public class TestableThread extends Thread {
    private final Object sync = new Object();
    private volatile String breakName;
    
    public TestableThread(Runnable r) {
        super(r);
    }
    
    /**
     * Run thread until it hits the named breakpoint or exits.
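     * Note: this method returns immediately; it does not block until the breakpoint is reached.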
     */
    public void goTo(String breakName) {
        synchronized (sync) {
            this.breakName = breakName;
            sync.notifyAll();
        }
        
        if (getState() == Thread.State.NEW) {
            start();
        }
    }
    
    /**
     * Run thread, not stopping at any break points.
     */
    public void go() {
        goTo(null);
    }
    
    public static boolean breakpoint(String breakName) {
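        // Breakpoint statement, typically used as: assert TestableThread.breakpoint("name");
        // The calling thread blocks here if it is a TestableThread whose current target
        // breakpoint equals breakName; otherwise this call returns true immediately.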
        if (breakName == null) {
            throw new IllegalArgumentException("breakName == null not allowed");
        }
        
        Thread thread = Thread.currentThread();
        if (thread instanceof TestableThread) {
            TestableThread tt = (TestableThread) thread;
            synchronized (tt.sync) {
                while (tt.breakName != null && tt.breakName.equals(breakName)) {
                    try {
                        tt.sync.wait();
                    } catch (InterruptedException e) {
                        throw new Error("not expected: " + e);
                    }
                }
            }
        }
        
        return true;
    }
}