Category: Mozilla

The Mozilla Blog: New Bytecode Alliance Brings the Security, Ubiquity, and Interoperability of the Web to the World of Pervasive Computing

New community effort will create a new cross-platform, cross-device computing runtime based on the unique advantages of WebAssembly 

MOUNTAIN VIEW, California, November 12, 2019 — The Bytecode Alliance is a newly-formed open source community dedicated to creating new software foundations, building on standards such as WebAssembly and WebAssembly System Interface (WASI). Mozilla, Fastly, Intel, and Red Hat are founding members.

The Bytecode Alliance will, through the joint efforts of its contributing members, deliver a state-of-the-art runtime environment and associated language toolchains, where security, efficiency, and modularity can all coexist across the widest possible range of devices and architectures. Technologies contributed and collaboratively evolved through the Alliance leverage established innovation in compilers, runtimes, and tooling, and focus on fine-grained sandboxing, capabilities-based security, modularity, and standards such as WebAssembly and WASI.

Founding members are making several open source project contributions to the Bytecode Alliance, including:

  • Wasmtime, a small and efficient runtime for WebAssembly & WASI
  • Lucet, an ahead-of-time compiler and runtime for WebAssembly & WASI focused on low-latency, high-concurrency applications
  • WebAssembly Micro Runtime (WAMR), an interpreter-based WebAssembly runtime for embedded devices
  • Cranelift, a cross-platform code generator with a focus on security and performance, written in Rust

Modern software applications and services are built from global repositories of shared components and frameworks, which greatly accelerates creation of new and better multi-device experiences but understandably increases concerns about trust, data integrity, and system vulnerability. The Bytecode Alliance is committed to establishing a capable, secure platform that allows application developers and service providers to confidently run untrusted code, on any infrastructure, for any operating system or device, leveraging decades of experience doing so inside web browsers.

Partner quotes:

Mozilla:

“WebAssembly is changing the web, but we believe WebAssembly can play an even bigger role in the software ecosystem as it continues to expand beyond browsers. This is a unique moment in time at the dawn of a new technology, where we have the opportunity to fix what’s broken and build new, secure-by-default foundations for native development that are portable and scalable. But we need to take deliberate, cross-industry action to ensure this happens in the right way. Together with our partners in the Bytecode Alliance, Mozilla is building these new secure foundations—for everything from small, embedded devices to large, computing clouds,” says Luke Wagner, Distinguished Engineer at Mozilla and co-creator of WebAssembly.

Fastly:

“Fastly is very happy to help bring the Bytecode Alliance to the community,” said Tyler McMullen, CTO at Fastly. “Lucet and Cranelift have been developed together for years, and we’re excited to formalize their relationship and help them grow faster together. This is an important moment in computing history, marking our chance to redefine how software will be built across clients, origins, and the edge. The Bytecode Alliance is our way of contributing to and working with the community, to create the foundations that the future of the internet will be built on.”

Red Hat: 

“Red Hat believes deeply in the role open source technologies play in helping provide the foundation for computing, from the operating system to the browser to the open hybrid cloud,” said Chris Wright, senior vice president and Chief Technology Officer at Red Hat. “Wasmtime is an exciting development that helps move WebAssembly out of the browser into the server space where we are experimenting with it to change the trust model for applications, and we are happy to be involved in helping it grow into a mature, community-based project.”

About Mozilla

Mozilla has been a pioneer and advocate for the web for more than 20 years. We are a global organization with a mission to promote innovation and opportunity on the Web. Today, hundreds of millions of people worldwide use the popular Firefox browser to discover, experience, and connect to the Web on computers, tablets and mobile phones. Together with our vibrant, global community of developers and contributors, we create and promote open standards that ensure the internet remains a global public resource, open and accessible to all.

The post New Bytecode Alliance Brings the Security, Ubiquity, and Interoperability of the Web to the World of Pervasive Computing appeared first on The Mozilla Blog.

Hacks.Mozilla.Org: Announcing the Bytecode Alliance: Building a secure by default, composable future for WebAssembly

Today we announce the formation of the Bytecode Alliance, a new industry partnership coming together to forge WebAssembly’s outside-the-browser future by collaborating on implementing standards and proposing new ones. Our founding members are Mozilla, Fastly, Intel, and Red Hat, and we’re looking forward to welcoming many more.

Three wasm runtimes (Wasmtime, Lucet and WAMR) with linked arms under a banner that says Bytecode Alliance and saying 'Come join us!'

We have a vision of a WebAssembly ecosystem that is secure by default, fixing cracks in today’s software foundations. And based on advances rapidly emerging in the WebAssembly community, we believe we can make this vision real.

We’re already putting these solutions to work on real world problems, and those solutions are moving towards production. But as an Alliance, we’re aiming for something even bigger…

Why

As an industry, we’re putting our users at risk more and more every day. We’re building massively modular applications, where 80% of the code base comes from package registries like npm, PyPI, and crates.io.

Making use of these flourishing ecosystems isn’t bad. In fact, it’s good!

The problem is that current software architectures weren’t built to make this safe, and bad guys are taking advantage of that… at a dramatically increasing rate.

What the bad guys are exploiting here is that we’ve gotten our users to trust us. When the user starts up your application, it’s like the user’s giving your code the keys to their house. They’re saying “I trust you”.

But then you invite all of your dependencies, giving each one of them a full set of keys to the house. These dependencies are written by people who you don’t know, and have no reason to trust.

A user entrusting their keys to an app. Then that app turns around and gives copies of the keys to all the dependencies. Two of the dependencies have masks on and say 'Look at that poor, lonely bitcoin. I’ll find a nice home for it' and 'look at what a lovely file system they have'

As a community, we have a choice. The WebAssembly ecosystem could provide a solution here… at least, if we choose to design it in a way that’s secure by default. But if we don’t, WebAssembly could make the problem even worse.

As the WebAssembly ecosystem grows, we need to solve this problem. And it’s a problem that’s too big to solve alone. That’s where the Bytecode Alliance comes in.

A personified WebAssembly logo holding a shield and saying 'I stand for the user!' vs a WebAssembly logo sitting on the couch saying 'I am busy. Cant the users protect themselves?'

What

The Bytecode Alliance is a group of companies and individuals, coming together to form an industry partnership.

Together, we’re putting in solid, secure foundations that can make it safe to use untrusted code, no matter where you’re running it—whether on the cloud, natively on someone’s desktop, or even on a tiny IoT device.

With this, developers can be as productive as they are today, using open source in the same way, but without putting their users at risk.

This common, reusable set of foundations can be used on its own, or embedded in other libraries and applications.

Currently, we’re collaborating on:

Runtimes:

  • Wasmtime is a stand-alone WebAssembly runtime that can be used as a CLI tool or embedded into other systems. It’s very configurable and scalable so that it can serve as the base for many use-case specific runtimes, from small IoT devices all the way up to cloud data centers.
  • Lucet is an example of a use-case specific runtime. It’s ideal for fast CDNs and edge compute, using ahead-of-time (AOT) compilation and other techniques to provide low latency and high concurrency. We are refactoring it to use Wasmtime at its core.
  • WebAssembly Micro Runtime (WAMR) is another use-case specific runtime. It’s ideal for small embedded devices that have extremely limited resources. It provides a small footprint and uses an interpreter to keep memory overhead low.

Runtime components:

  • Cranelift is emerging as a state-of-the-art code generator. It is designed to generate optimized machine code very quickly because it parallelizes compilation on a function-by-function level.
  • WASI common is a standalone implementation of the WebAssembly System Interface that runtimes can use.

Language tooling:

  • cargo-wasi is a lightweight Cargo subcommand that compiles Rust code to target WebAssembly and the WebAssembly System Interface for outside-the-browser use (see the sketch just after this list).
  • wat and wasmparser parse WebAssembly. wat parses the text format, and wasmparser is an event-driven library for parsing the binary format.
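
To make the toolchain entries above concrete, here’s a minimal sketch of the workflow they enable: an ordinary Rust program compiled to WASI with cargo-wasi and executed by Wasmtime. The commands in the comments follow those projects’ documented usage, but treat the exact paths and flags as illustrative rather than authoritative.

// src/main.rs: an ordinary Rust program targeting WASI.
//
// Build and run it outside the browser with something like:
//
//   cargo install cargo-wasi
//   cargo wasi build --release
//   wasmtime target/wasm32-wasi/release/hello.wasm
//
// WASI supplies the capability for stdout, so println! just works,
// with no browser glue involved.
fn main() {
    println!("Hello from outside the browser!");
}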

And we expect this set of projects to expand as we grow the Alliance.

Our members are also leading the WASI standards effort itself, as well as the Rust to WebAssembly working group.

Who

The founding members of the Bytecode Alliance are Mozilla, Fastly, Intel, and Red Hat.

We’re starting with a lightweight governance structure. We intend to formalize this structure gradually over time. You can read more about this in our FAQ.

As we said before, this is too big to go it alone. That’s why we’re happy to welcome new members to the Alliance. If you want to join, please email us at hello@bytecodealliance.org.

Here’s why we think this is important:

WebAssembly is changing the web, but we believe WebAssembly can play an even bigger role in the software ecosystem as it continues to expand beyond browsers. This is a unique moment in time at the dawn of a new technology, where we have the opportunity to fix what’s broken and build new, secure-by-default foundations for native development that are portable and scalable. But we need to take deliberate, cross-industry action to ensure this happens in the right way. Together with our partners in the Bytecode Alliance, Mozilla is building these new secure foundations—for everything from small, embedded devices to large, computing clouds.

— Luke Wagner, Distinguished Engineer at Mozilla and co-creator of WebAssembly

Fastly is very happy to help bring the Bytecode Alliance to the community. Lucet and Cranelift have been developed together for years, and we’re excited to formalize their relationship and help them grow faster together. This is an important moment in computing history, marking our chance to redefine how software will be built across clients, origins, and the edge. The Bytecode Alliance is our way of contributing to and working with the community, to create the foundations that the future of the internet will be built on.

Tyler McMullen, CTO at Fastly

Red Hat believes deeply in the role open source technologies play in helping provide the foundation for computing from the operating system to the browser to the open hybrid cloud. Wasmtime is an exciting development that helps move WebAssembly out of the browser into the server space where we are experimenting with it to change the trust model for applications, and we are happy to be involved in helping it grow into a mature, community-based project.

— Chris Wright, senior vice president and Chief Technology Officer at Red Hat

So that’s the big news! 🎉

To learn more about what we’re building together, read on.

The problem

The way that we architect software has radically changed over the past 20 years. In 2003, companies had a hard time getting developers to reuse code.

Now 80% of your average code base is built with modules downloaded from registries like JavaScript’s npm, Python’s PyPI, Rust’s crates.io, and others. Even C++ is moving towards enabling an ecosystem of composable modules.

This new way of developing applications has made us much more productive as an industry. But it has also introduced gaping wide holes in our security. And as I talked about above, the bad guys are using those holes to attack our users.

The user is entrusting us with the keys to their house and we’re giving that access away like candy… and that’s not because we’re irresponsible. It’s because there’s huge value in these packages, but no easy way to mitigate the security risks that come from using them.

More than this, it’s not just our own dependencies that come along for the ride. It’s also any modules that they depend on—the indirect dependencies.

Modules passing keys down the dependency tree

What do these developers get access to?

  1. resources on the machine—things like files and memory
  2. APIs and syscalls—tools that they can use to do things to those resources

system resources on one side, including memory, file system, and network connections. Host-provided APIs and syscalls on the other side, including open, write, getrandom, clock, and usb_make_path

This means that these modules can do a lot of damage. This could either be on purpose, as with malicious code, or completely accidentally, as with vulnerable code.

Let’s look at how these attacks work.

Malicious code

Malicious code is written by the attacker themselves.

Attackers often use social engineering to get their package into applications. They create a package that has useful features, and then sneak in some malicious code. Once the code is in the app and the user starts up the app, the code can attack the user.

Dependency tree with every module holding keys. One of them is a malicious module saying Let's take a look at that file system

This is how a hacker stole $1.5 million worth of cryptocurrency (and almost $13 million more) from users, starting in March 2019.

  • Day 0 (March 6): The attacker published a module to npm: electron-native-notify. This seemed useful—it helped Electron apps fire off native notifications in a way that worked across platforms. It didn’t have any malicious code in it, yet.
  • Day 2: To pull off the heist, the attacker had to get this module into the cryptocurrency app. The vector they chose was a dependency of an application that helps users manage their cryptocurrency, the Agama Wallet.
  • Day 17: The attacker added the malicious payload.
  • Day 41-66: The app was rebuilt, pulling in the most recent version of the dependency, and electron-native-notify with it. At this point, it started sending users’ “seeds” (username/password combos) to a server. The attacker could then use these seeds to empty users’ wallets.
  • Day 90: A user alerted npm to suspicious behavior in electron-native-notify, and npm notified the cryptocurrency platform, which moved funds from vulnerable wallets to a secure one.

Let’s look at what access this malicious code needed to pull off the attack.

It needed the seed. To do this, it got access to memory holding the seed.

Then it needed to send the seed to a server. For this it needed access to a socket, and access to an API or syscall for opening that socket.

Diagram of the system resources and syscall needed to pull this off

Malicious code attacks are on the rise as more and more attackers realize how vulnerable our systems of trust are. For example, the number of malicious modules published to npm more than doubled from 2017 to 2019. And npm’s VP of security points out that these attacks are getting more serious.

In 2019, we’re seeing more financially motivated attacks. Early attacks were focused on shenanigans—deleting files and trying to steal some credentials. Real basic attacks, kind of smash and grab style. Now end users are the target. And [the attackers] are trying hard to be patient, to be sneaky, to really have a well planned out attack.

Vulnerable code

Vulnerabilities are different. The module maintainer isn’t trying to do anything bad.

The module maintainer just has a bug in their code. But attackers can use that bug to trick the code into doing something it shouldn’t do.

Dependency tree with a vulnerable module talking on the phone to an attacker, asking what the attacker wants it to do

For an example of this, let’s look at ZipSlip.

ZipSlip is a vulnerability found in modules in many ecosystems: JavaScript, Java, .NET, Go, Ruby, C++, Python… the list goes on. And it affected thousands of projects, including ones from HP, Amazon, Apache, Pivotal, and many more.

If a module had this vulnerability, attackers could use it to replace files anywhere in the file system. For example, the attacker could use it to replace .js files in the node_modules directory. Then, when the .js file was required, the attacker’s code would run.

Attacker telling dependency to unpack a zip file that will overwrite a file in the node modules directory

So how did this work?

The attacker would create a file, and give it a filename that included ‘../’, building up a relative file path. When the vulnerable code unzipped it, it wouldn’t sanitize the filename. So it would call the write syscall with the relative path, placing the file wherever the attacker wanted it to go in the file system.
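
To make the pattern concrete, here’s a small Rust sketch (with hypothetical paths, purely for illustration) of why the unsanitized join is dangerous and what a safe check looks like:

use std::path::{Component, Path};

fn main() {
    let dest = Path::new("/srv/app/uploads");
    // A malicious entry name as it might appear inside a zip archive:
    let entry = "../../node_modules/express/index.js";

    // The vulnerable pattern: joining without sanitizing. `join` keeps
    // the `..` components, so once the OS resolves the path, the
    // "extracted" file lands at /srv/node_modules/express/index.js,
    // far outside the destination directory.
    let unsafe_target = dest.join(entry);
    println!("would write to: {}", unsafe_target.display());

    // The fix: reject any entry whose path is absolute or walks upward.
    let is_safe = Path::new(entry)
        .components()
        .all(|c| matches!(c, Component::Normal(_) | Component::CurDir));
    assert!(!is_safe, "this entry should be rejected");
}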

So what access did the vulnerable code need for the attacker to pull this off?

It needed access to the directory with the sensitive files.

It also needed access to the write syscall.

Diagram of the system resource and syscall needed to pull this off

Vulnerabilities are on the rise, as well, with an 88% increase in the last two years. As Snyk reported in their 2019 State of Open Source Security Report:

In 2018, vulnerabilities for npm grew by 47%. Maven Central and PHP Packagist disclosures grew by 27% and 56% respectively.

With these risks, you would think that patching vulnerabilities would be a high priority. But as Snyk found, only 59% of packages have known fixes for disclosed vulnerabilities, because many maintainers don’t have the time or the security know-how to fix these vulnerabilities.

This leads to widespread vulnerability, as these vulnerable modules are depended on by other modules. For example, a study of npm modules found that up to 40% of packages depend on code with at least one publicly known vulnerability.

How can we protect users today?

So how can you protect your users against these threats in today’s software ecosystem?

  • You can run scanners that detect fishy coding patterns in your code and that of your dependencies. But there are many things these automated tools can’t catch.
  • You could subscribe to a monitoring service that alerts you when a vulnerability is found in one of your dependencies. But this only works for vulnerabilities that have already been found. And even once a vulnerability has been found, there’s a good chance the maintainer won’t be able to fix it quickly. For example, Snyk found that for the top 6 npm packages, the median time-to-fix (measured starting at the vulnerability’s inclusion) was 2.5 years.
  • You could try and do manual code review. Whenever there’s an update in a dependency, you’d review the changed lines. But if you have hundreds of modules in your dependency tree, that could easily be 100,000 lines of code to review every few weeks.
  • You could pin your dependencies, so that malicious code can’t get in until you’ve had a chance to review. But then fixes for vulnerabilities would be stalled, leaving your app vulnerable longer.

This all makes for a kind of no-win scenario, which makes you want to throw up your hands and just hope for the best. But as an industry, we simply can’t do that. There’s too much at stake for our users.

All of these solutions try to catch malicious and vulnerable code. But what if we look at this a different way?

Part of the problem was the access that these modules had. What if we took away that access?

system resources and syscalls with red no-access signs crossing them out

We’ve faced this kind of challenge as an industry before… just at a different granularity.

When you have two programs running on a computer at the same time, how do you know that one won’t mess with the other? Do you have to trust the programs to behave?

No, because protection is built into the operating system. The tool that OSs use to protect programs from each other is the process.

When you start a program, the OS fires up a new process. Each process gets its own chunk of memory to use, and it can’t access memory in other processes.

If it wants to get data from the other process, it has to ask. Then the data is sent across the process boundary. This makes sure each program is in control of its own data in memory.

Two processes, each with its own memory. A module in one process says 'Hey, I am sending over some data'. Then it has to serialize the data, send it across a pipe that connects the two processes, and then deserialize it.
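
Here’s a rough Rust sketch of that dance, using a child copy of cat (assumed to be on the PATH of a Unix-like system) as the other program:

use std::io::{Read, Write};
use std::process::{Command, Stdio};

fn main() -> std::io::Result<()> {
    // Spawn the "other process" with pipes for its stdin and stdout.
    let mut child = Command::new("cat")
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()?;

    // "Serialize": turn in-memory data into bytes and push them across
    // the process boundary. Dropping stdin closes the pipe.
    child.stdin.take().unwrap().write_all(b"hey, I am sending over some data")?;

    // "Deserialize": the receiving side copies the bytes back into its
    // own memory. The two processes never share memory directly.
    let mut received = String::new();
    child.stdout.take().unwrap().read_to_string(&mut received)?;
    child.wait()?;
    println!("received: {}", received);
    Ok(())
}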

This memory isolation does make it much safer to run two programs at the same time. But this isn’t perfect security. A malicious program can still mess with certain other resources, like files in the file system.

VMs and containers were developed to fix this. They ensure that something running in one VM or container can’t access the file system of another. And with sandboxing, it’s possible to take away access to APIs and syscalls.

So could we put each package in its own little isolated unit? Its own little sandboxed process?

That would solve our problem. But it would also introduce a new problem.

All of these techniques are relatively heavyweight. If we wrap hundreds of packages into their own sandboxed process, we’d quickly run out of memory. We’d also make the function calls between the different packages much slower and more complicated.

A large dependency tree of heavy weight processes connected by pipes, vs a small tree of modules

But it turns out that new technologies are giving us new options.

As we’re building out the WebAssembly ecosystem, we can design how the pieces fit together in a way that gives you the kind of isolation that you get with processes or containers, but without the downsides.

Tomorrow’s solution: WebAssembly “nanoprocesses”

WebAssembly can provide the kind of isolation that makes it safe to run untrusted code. We can have an architecture that’s like Unix’s many small processes, or like containers and microservices.

But this isolation is much lighter weight, and the communication between them isn’t much slower than a regular function call.

This means you can use them to wrap a single WebAssembly module instance, or a small collection of module instances that want to share things like memory among themselves.

Plus, you don’t have to give up the nice programming language affordances—like function signatures and static type checking.

Two heavyweight processes connected by slow pipes next to two small nanoprocesses connected with a small slot

So how does this work? What about WebAssembly makes this possible?

First, there’s the fact that each WebAssembly module is sandboxed by default.

By default, the module doesn’t have access to APIs and system calls. If you want the module to be able to interact with anything outside of the module, you have to explicitly provide the module with the function or syscall. Then the module can call it.
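
As a rough sketch of what this looks like from the embedder’s side, here’s a host using the wasmtime crate to decide exactly which functions a guest can see. The API shown matches a recent wasmtime release and has changed over time, so treat the names as illustrative:

use wasmtime::{Engine, Linker, Module, Store};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let engine = Engine::default();

    // A guest module that imports exactly one host function. It cannot
    // name, let alone call, anything the host doesn't provide.
    let module = Module::new(
        &engine,
        r#"(module
             (import "host" "log" (func $log (param i32)))
             (func (export "run") (call $log (i32.const 42))))"#,
    )?;

    // The host decides what to expose. Only "host.log" is wired up here:
    // no file system, no network, no clocks.
    let mut linker = Linker::new(&engine);
    linker.func_wrap("host", "log", |x: i32| println!("guest says: {}", x))?;

    let mut store = Store::new(&engine, ());
    let instance = linker.instantiate(&mut store, &module)?;
    instance.get_typed_func::<(), ()>(&mut store, "run")?.call(&mut store, ())?;
    Ok(())
}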

WebAssembly engine passing a single syscall into a nanoprocess

Second, there’s the memory model.

Unlike a normal binary compiled directly to something like x86, a WebAssembly module doesn’t have access to all of the memory in its process. It only has access to the chunk of memory that has been assigned to it.

In theory, scripting languages would also provide this kind of isolation. Code in scripting languages can’t directly access the memory in the process. It can only access memory through the variables it has in scope.

But in most scripting language ecosystems, code makes a lot of use of a shared global object. That’s effectively the same as shared memory. So the conventions in the ecosystem make memory isolation a problem for scripting languages as well.

WebAssembly could have had this problem. In the early days, some wanted to establish a convention of passing a shared memory in to every module. But the community group opted for the more secure convention of keeping memory encapsulated by default.

This gives us memory isolation between the two modules. That means that a malicious module can’t mess with the memory of its parent module.

A nanoprocess containing a memory object that is a subset of the process's memory

But how do we share data between modules? We have to pass it in or out as the values of a function call.

There’s a problem here, though. By default, WebAssembly only has a handful of numeric types, which means you can only pass individual numbers across.

Two nanoprocesses passing numbers to each other, but can't pass more complex data in memory

Here’s where the third feature comes in—the interface types proposal, which we demoed in August. With interface types, modules can communicate using more complex values—things like strings, sequences, records, variants, and nested combinations of these.

That makes it easy for two modules to exchange data, but in a way that’s secure and fast. The WebAssembly engine can do direct copies between the caller and the callee’s memories, without having to serialize and deserialize the data. And this works even if the two modules aren’t compiled from the same language.

One module in a nanoprocess asking the engine to pass a string from its memory over to the other nanoprocess

So that’s how we ensure that a malicious module can’t mess with the memory of other modules in the application.

But we don’t just need to take precautions around how memory is handled between these modules. Because if the application is actually going to be able to do anything with that data, it is going to need to call APIs or system calls. And those APIs or system calls might have access to shared resources, like the file system. And as we talked about in a previous post, the way that most operating systems handle access to the file system really falls down in providing the security we need here.

So we need APIs and system calls that actually have the concept of permissions baked into them, so that they can give different modules different permissions to different resources.

This is where the fourth feature comes in: WASI, the WebAssembly System Interface.

That gives us a way to isolate these different modules from each other and give them fine-grained permissions to particular parts of the file system and other resources, as well as fine-grained permissions for different system calls.

System resources with locks and chains around them

So with this, we’ve taken control of the keys.

What’s missing?

Right now, we don’t have a way to pass these keys down through the dependency tree. We need a way for parent modules to give these keys to their dependencies. This way, they can give their dependencies exactly the keys they need and no other ones. And then, those dependencies can do the same thing for their children, all the way down through the tree.

That’s what we’ll be working on next. In technical terms, we’re planning to use a fine-grained form of per-module virtualization. This is an idea that researchers have already demonstrated, and we’re working on bringing it to WebAssembly.

A parent nanoprocess with two children. The parent is passing keys for the open syscall and a particular directory to one, and the getrandom syscall to the other

Taken all together, these features make it possible for us to have similar isolation to that of a process, but with much lower overhead. This pattern of usage is what we’re calling a WebAssembly nanoprocess.

It’s still just WebAssembly, but it follows a particular pattern. And if we build this pattern into the tools and conventions we use, we can make third-party code reuse safe in WebAssembly in a way it hasn’t been in other ecosystems to date.

Sidenote: At the moment, every nanoprocess is made up of exactly one wasm module. In the future, we’ll focus on toolchain support for creating nanoprocesses containing multiple wasm modules, allowing native-style dynamic linking while preserving the memory isolation of nanoprocesses.

With these foundations in place, developers will be able to build dependency trees of isolated packages, each with their own tiny little nanoprocess around them.

How do nanoprocesses protect users?

So how does this help us keep users safe?

In both of these cases, it’s the fact that we’re following the principle of least authority that’s keeping us secure. Let’s walk through how this helps.

Malicious code

In order to carry out an attack, a module often needs access to resources and APIs that it wouldn’t otherwise need access to. With our approach, the module has to declare that it needs access to these things. That makes it easy to see when it’s asking to do things it shouldn’t.

For example, the wallet stealing module needed access to both a socket pointing to the server that it was sending the seed to, and the ability to open a network connection.

But this module was just supposed to take the text that was passed in to it and hand it off to the system, so that the system could display it. Why would the module need to open a network connection to a remote server to do that?

If electron-native-notify had asked for these permissions from the start, that would’ve been a serious red flag. The maintainers of EasyDEX-GUI, the application front end it was pulled into, probably wouldn’t have accepted it into their dependency tree.

A malicious module asking a direct dependency of the target app to join the dependency tree. The malicious module says 'Hey, I want to join you! All I need is access to a socket and the open syscall' and the direct dependency responds 'Wait, why? I dont even have that. I would need to ask the app for it... and I'm not gonna do that'

If the malicious maintainer had tried to slip these access requests in later, sometime after it was already in EasyDEX-GUI, then that would’ve been a breaking change. The module’s signature would have changed, and WebAssembly throws an error when the calling code doesn’t provide the imports or parameters that a module expects.

This means there’s no way to sneak in access to a new resource or system call under the radar.

A malicious module that is in the dependency tree asking for an upgrade in permissions and its parent module saying 'That doesnt make sense. Let me look at those changes'

So it’s unlikely that the maintainer could get the basic access they needed to communicate with the server. But even if this maintainer were somehow able to trick the app developer into granting them this access, they still would have an extremely hard time getting the seed.

That’s because the seed would be in another module’s memory. There are two ways for a module to get access to something from another module’s memory, and neither of them allows the malicious module to be sneaky about it:

  1. The module that has the seed exports the seed variable, making it automatically visible to all other modules in the app.
  2. The module that has the seed passes it directly to a function in the malicious module as a parameter.

Both seem incredibly unlikely for a variable this sensitive.

A malicious dependency thinking to itself 'Dang it! How do I get the seed?' because the seed is in the memory of a different nanoprocess

There is unfortunately one less straightforward way that attackers can access another module’s memory—side-channel attacks like Spectre. OS processes attempt to provide something called time protection, which helps defend against these attacks. Unfortunately, CPUs and OSes don’t offer time protection at a granularity finer than the process.

There’s a good chance you’ve never thought about using processes to protect your code. Why don’t most people have to care? Because hosting providers take care of it, sometimes using complex techniques.

If you are architecting your code with isolated processes now, you may still need them. Making a shift to nanoprocesses here would take careful analysis.

But there are lots of situations where time protection isn’t needed, or where people aren’t using processes at all right now. These are good candidates for nanoprocesses.

Plus, as CPUs evolve to provide cheaper time protection features, WebAssembly nanoprocesses will be in a good position to quickly take advantage of these features, at which point you won’t need to use OS processes for this.

Vulnerable code

How does this protect against attackers exploiting vulnerabilities?

Just as with malicious modules, it’s less likely that a vulnerable module legitimately needs the combination of access rights that the attacker needs.

In our ZipSlip example, even for legitimate uses, the vulnerable module does need access to a dangerous syscall—the write syscall which allows it to write files.

But it doesn’t need access to the node_modules directory for any legitimate use. It would only have access to the directory that its parent passes to it, which would be some directory that zip files can be unarchived into.
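
With WASI, that capability shows up concretely as a preopened directory. Here’s a sketch of what the guest experiences, assuming the Wasmtime CLI’s documented --dir flag is the only grant:

// Guest code compiled to wasm32-wasi. Run with (assumed invocation):
//
//   wasmtime --dir=uploads unzip.wasm
//
// Only the preopened "uploads" directory is reachable; there is no
// ambient authority to the rest of the file system.
use std::fs;

fn main() -> std::io::Result<()> {
    // Permitted: inside the capability the parent granted.
    fs::write("uploads/extracted.txt", b"ok")?;

    // Denied: no preopen covers this path, so the write fails.
    let escape = fs::write("../node_modules/evil.js", b"nope");
    assert!(escape.is_err());
    Ok(())
}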

That makes it impossible for the vulnerable module to carry out the attacker’s wishes (unless the parent knowingly passes a very sensitive directory in to the vulnerable module).

A dependency with the ZipSlip vulnerability talking on the phone to an attacker saying 'Oh, I am sorry. I dont have access to that folder. My parent only gave me a handle for the uploads directory'

As noted in the caveat above, it is possible that the application developer would pass in a sensitive directory, like the root directory. But for this to work, the mistake would have to be made in the top-level package; it can’t happen in a transitive dependency. Making all of these accesses explicit makes it much easier to catch those kinds of sloppy mistakes in review.

Other benefits of nanoprocesses

With nanoprocesses, we can continue to build massively modular applications… we can keep the developer productivity gains while also ensuring that our users are safe.

But what else can nanoprocesses help with?

Isolation that fits anywhere

These nanoprocesses—these little container-like things—can fit in all sorts of places that regular processes and containers and VMs can’t go.

This is what makes it possible for us to wrap each package or group of packages in its own tiny nanoprocess. Because they’re so small, they won’t blow up the size of your application. But dependency trees aren’t the only places where you would use these nanoprocesses.

Here are just a couple of other use cases we see for them, and which partners in the Alliance are working on:

Handling requests for tens of thousands of customers on the same machine

Fastly serves 13% of the Internet’s traffic. That means responding to millions of requests, and trying to do it with as little overhead as possible while keeping customers secure.

They’ve come up with an innovative architecture using WebAssembly nanoprocesses which makes it possible to securely host tens of thousands of simultaneously running programs in the same process. Their approach completely isolates each request from previous requests, ensuring full VM-style isolation, and it has the added advantage of cold starts that are orders of magnitude faster.

Software fault isolation for individual libraries in native applications

In some applications, most of your code is written in-house, and only a few libraries are written by untrusted developers outside of the organization.

In these cases, you can use a nanoprocess to create a lightweight, in-process sandbox for a single library, rather than having your full dependency tree wrapped in nanoprocesses.

We’ve started building this kind of sandboxing in Firefox to protect users from bugs in third party libraries, like font rendering engines, and image and audio decoders.

Improved software composability

For decades, software development best practices have been driving towards more and more componentized architectures, built out of small, Lego-like pieces.

This started with libraries and modules described above, running in the same process in the same language. More recently, there’s been a progression towards services, and then microservices, which give you the security benefits of isolation.

But beyond security, the isolation services provide also makes software more composable. It means that you can plug together services that are written in different languages, or different versions of the same language. That gives you a much larger set of lego blocks to play with.

But these services can’t go all the places that libraries can go because they’re too big. They are often running inside a process, which is running inside of a container, which is running on a server. This means that you often have to use a coarse-grained approach when breaking your app apart into these services.

With wasm, we can replace microservices with nanoprocesses and get the same security and language independence benefits. It gives us the composability of microservices without the weight. This means we can use a microservices-style architecture and the language interoperability that provides, but with a finer-grained approach to defining the component services.

A dependency tree of nanoprocesses, each containing a module written in a different language

Join us

So that’s the vision we have for a safer future for our users. We believe that WebAssembly doesn’t just have the opportunity but the responsibility to provide this kind of safety.

And we hope that you’ll join us!

The post Announcing the Bytecode Alliance: Building a secure by default, composable future for WebAssembly appeared first on Mozilla Hacks – the Web developer blog.

Mozilla Addons Blog: Extensions in Firefox 71

Firefox 71 is a light release in terms of extension changes. I’d like to tell you about a few interesting improvements nevertheless.

Thanks to Nils Maier, there have been various improvements to the downloads API, specifically in handling download failures. In addition to previously reported failures, the browser.downloads.download API will now report an error in case of various 4xx error codes. Similarly, HTTP 204 (No Content) and HTTP 205 (Reset Content) are now treated as bad content errors. This makes the API more compatible with Chrome and gives developers a way to handle these errors in their code. With the new allowHttpErrors parameter, extensions may also ignore some http errors when downloading. This will allow them to download the contents of server error pages.

Please also note that the lowercase isarticle filter for the tabs.onUpdated listener has been removed in Firefox 71. Developers should instead use the camelCase isArticle filter.

A few more minor updates are available as well:

  • Popup windows now include the extension name instead of its moz-extension:// url when using the windows.create API.
  • Clearer messaging when encountering unexpected values in manifest.json (they are often warnings, not errors)
  • Extension-registered devtools panels now interact better with screen readers

Thank you contributors Nils and Myeongjun Go for the improvements, as well as our WebExtensions team for fixing various tests and backend related issues. If you’d like to help make this list longer for the next releases, please take a look at our wiki on how to contribute. I’m looking forward to seeing what Firefox 72 will bring!

The post Extensions in Firefox 71 appeared first on Mozilla Add-ons Blog.

Mozilla Addons Blog: Security improvements in AMO upload tools

We are making some changes to the submission flow for all add-ons (both AMO- and self-hosted) to improve our ability to detect malicious activity.

These changes, which will go into effect later this month, will introduce a small delay in automatic approval for all submissions. The delay can be as short as a few minutes, but may take longer depending on the add-on file.

If you use a version of web-ext older than 3.2.1, or a custom script that connects to AMO’s upload API, this new delay in automatic approval will likely cause a timeout error. This does not mean your upload failed; the submission will still go through and be approved shortly after the timeout notification. Your experience using these tools should remain the same otherwise.

You can prevent the timeout error from being triggered by updating web-ext or your custom scripts before this change goes live. We recommend making these updates this week.

  • For web-ext: update to web-ext version 3.2.1, which has a longer default timeout for `web-ext sign`. To update your global install, use the command `npm install -g web-ext`.
  • For custom scripts that use the AMO upload API: make sure your upload scripts account for potentially longer delays before the signed file is available. We recommend allowing up to 15 minutes.

The post Security improvements in AMO upload tools appeared first on Mozilla Add-ons Blog.

David Humphrey: Hacktoberfest 2019

I’ve been marking student submissions in my open source course this weekend, and with only a half-dozen more to do, the procrastinator in me decided a blog post was in order.

Once again I’ve asked my students to participate in Hacktoberfest.  I wrote about the experience last year, and wanted to give an update on how it went this time.

I layer a few extra requirements on the students, some of them to deal with things I’ve learned in the past.  For one, I ask them to set some personal goals for the month, and look at each pull request as a chance to progress toward achieving these goals.  The students are quite different from one another, which I want to celebrate, and this lets them go in different directions, and move at different paces.

Here are some examples of the goals I heard this time around:

  • Finish all the required PRs
  • Increase confidence in myself as a developer
  • Master git/GitHub
  • Learn new languages and technologies (Rust, Python, React, etc)
  • Contribute to projects we use and enjoy on a daily basis (e.g., VSCode)
  • Contribute to some bigger projects (e.g., Mozilla)
  • Add more experience to our resume
  • Read other people’s code, and get better at understanding new code
  • Work on projects used around the world
  • Work on projects used locally
  • Learn more about how big projects do testing

So how did it go?  First, the numbers:

  • 62 students completed all 4 PRs during the month (95% completion rate)
  • 246 Pull Requests were made, consisting of 647 commits to 881 files
  • 32K lines of code were added or modified

I’m always interested in the languages they choose.  I let them work on any open source projects, so given this freedom, how will they use it?  The most popular languages by pull request were:

  • JavaScript/TypeScript – 50%
  • HTML/CSS – 11%
  • C/C++/C# – 11%
  • Python – 10%
  • Java – 5%

Web technology projects dominate GitHub, and it’s interesting to see that this is not entirely out of sync with GitHub’s own stats on language positions.  As always, the long-tail provides interesting info as well.  A lot of people worked on bugs in languages they didn’t know previously, including:

Swift, PHP, Go, Rust, OCaml, PowerShell, Ruby, Elixir, Kotlin

Because I ask the students to “progress” with the complexity and involvement of their pull requests, I had fewer people working in “Hacktoberfest” style repos (projects that pop up for October, and quickly vanish). Instead, many students found their way into larger and well known repositories and organizations, including:

Polymer, Bitcoin, Angular, Ethereum, VSCode, Microsoft Calculator, React Native for Windows, Microsoft STL, Jest, WordPress, node.js, Nasa, Mozilla, Home Assistant, Google, Instacart

The top GitHub organization by pull request volume was Microsoft.  Students worked on many Microsoft projects, which is interesting, since they didn’t coordinate their efforts.  It turns out that Microsoft has a lot of open source these days.

When we were done, I asked the students to reflect on the process a bit, and answer a few questions.  Here’s what I heard.

1. What are you proud of?  What did you accomplish during October?

  • Contributing to big projects (e.g., Microsoft STL, Nasa, Rust)
  • Contributing to small projects, who really needed my help
  • Learning a new language (e.g., Python)
  • Having PRs merged into projects we respect
  • Translation work — using my personal skills to help a project
  • Seeing our work get shipped in a product we use
  • Learning new tech (e.g., complex dev environments, creating browser extensions)
  • Successfully contributing to a huge code base
  • Getting involved in open source communities
  • Overcoming the intimidation of getting involved

2. What surprised you about Open Source?  How was it different than you expected?

  • People in the community were much nicer than I expected
  • I expected more documentation, it was lacking
  • The range of projects: big companies, but also individuals and small communities
  • People spent time commenting on, reviewing, and helping with our PRs
  • People responded faster than we anticipated
  • At the same time, we also found that some projects never bothered to respond
  • Surprised to learn that everything I use has some amount of open source in it
  • Surprised at how many cool projects there are, so many that I don’t know about
  • Even on small issues, lead contributors will get involved in helping (e.g., 7 reviews in a node.js fix)
  • Surprised at how unhelpful the “Hacktoberfest” label is in general
  • “Good First Issue” doesn’t mean it will be easy.  People have different standards for what this means
  • Lots of things on GitHub are inactive, be careful you don’t waste your time
  • Projects have very different standards from one to the next, in terms of process, how professional they are, etc.
  • Surprised to see some of the hacks even really big projects use
  • Surprised how willing people were to let us get involved in their projects
  • Lots of camaraderie between devs in the community

3. What advice would you give yourself for next time?

  • Start small, progress from there
  • Manage your time well, it takes way longer than you think
  • Learn how to use GitHub’s Advanced Search well
  • Make use of your peers, ask for help
  • Less time looking for a perfect issue, more time fixing a good-enough issue
  • Don’t rely on the Hacktoberfest label alone.
  • Don’t be afraid to fail.  Even if a PR doesn’t work, you’ll learn a lot in the process
  • Pick issues in projects you are interested in, since it takes so much time
  • Don’t be afraid to work on things you don’t (yet) know.  You can learn a lot more than you think.
  • Read the contributing docs, and save yourself time and mistakes
  • Run and test code locally before you push
  • Don’t be too picky with what you work on, just get involved
  • Look at previously closed PRs in a project for ideas on how to solve your own.

One thing that was new for me this time around was seeing students get involved in repos and projects that didn’t use English as their primary language.  I’ve had lots of students do localization in projects before.  But this time, I saw quite a few students working in languages other than English in issues and pull requests.  This is something I’ve been expecting to see for a while, especially with GitHub’s Trending page so often featuring projects not in English.  But it was the first time it happened organically with my own students.

Once again, I’m grateful to the Hacktoberfest organizers, and to the hundreds of maintainers we encountered as we made our way across GitHub during October.  When you’ve been doing open source a long time, and work in git/GitHub everyday, it can be hard to remember what it’s like to begin.  Because I continually return to the place where people start, I know first-hand how valuable it is to be given the chance to get involved, for people to acknowledge and accept your work, and for people to see that it’s possible to contribute.

Ryan Harter: Technical Leadership Paths

I found this article a few weeks ago and I really enjoyed the read. The author outlines what a role can look like for very senior ICs. It’s the first in a (yet to be written) series about technical leadership and long term IC career paths. I’m excited to read more!

In particular, I am delighted to see her call out strategic work as a way for a senior IC to deliver value. I think there’s a lot of opportunity for senior ICs to deliver strategic work, but in my experience organizations tend to under-value this type of work (often unintentionally).

My favorite projects to work on are high impact and difficult to execute, even if they’re not deeply technical. In fact, I’ve found that my most impactful projects tend to only have a small technical component. Instead, the real value tends to come from spanning a few different technical areas, tackling some cultural change, or taking time to deeply understand the problem before throwing a solution at it. Framing these projects as “strategic” helps me put my thumb on the type of work I like doing.

Keavy also calls out strike teams as a valuable way for ICs to work on high impact projects without moving into management. In my last three years at Mozilla, I’ve been fortunate to be a part of several strike teams, and upon reflection I find that these are the projects I’m most proud of.

I’m fortunate that Mozilla has a well documented growth path for senior ICs. All the same, I am learning a lot from her framing. I’m excited to read more!

The Rust Programming Language Blog: Announcing Rust 1.39.0

The Rust team is happy to announce a new version of Rust, 1.39.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.39.0 is as easy as:

rustup update stable

If you don’t have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.39.0 on GitHub.

What’s in 1.39.0 stable

The highlights of Rust 1.39.0 include async/.await, shared references to by-move bindings in match guards, and attributes on function parameters. Also, see the detailed release notes for additional information.

The .await is over, async fns are here

Previously in Rust 1.36.0, we announced that the Future trait is here. Back then, we noted that:

With this stabilization, we hope to give important crates, libraries, and the ecosystem time to prepare for async / .await, which we’ll tell you more about in the future.

A promise made is a promise kept. So in Rust 1.39.0, we are pleased to announce that async / .await is stabilized! Concretely, this means that you can define async functions and blocks and .await them.

An async function, which you can introduce by writing async fn instead of fn, does nothing other than to return a Future when called. This Future is a suspended computation which you can drive to completion by .awaiting it. Besides async fn, async { ... } and async move { ... } blocks, which act like closures, can be used to define “async literals”.

For more on the release of async / .await, read Niko Matsakis’s blog post.

References to by-move bindings in match guards

When pattern matching in Rust, a variable, also known as a “binding”, can be bound in the following ways:

  • by-reference, either immutably or mutably. This can be achieved explicitly e.g. through ref my_var or ref mut my_var respectively. Most of the time though, the binding mode will be inferred automatically.

  • by-value — either by-copy, when the bound variable’s type implements Copy, or otherwise by-move.

Previously, Rust would forbid taking shared references to by-move bindings in the if guards of match expressions. This meant that the following code would be rejected:

fn main() {
    let array: Box<[u8; 4]> = Box::new([1, 2, 3, 4]);

    match array {
        nums
//      ---- `nums` is bound by move.
            if nums.iter().sum::<u8>() == 10
//                 ^------ `.iter()` implicitly takes a reference to `nums`.
        => {
            drop(nums);
//          ----------- `nums` was bound by move and so we have ownership.
        }
        _ => unreachable!(),
    }
}

With Rust 1.39.0, the snippet above is now accepted by the compiler. We hope that this will give a smoother and more consistent experience with match expressions overall.

Attributes on function parameters

With Rust 1.39.0, attributes are now allowed on parameters of functions, closures, and function pointers. Whereas before, you might have written:

#[cfg(windows)]
fn len(slice: &[u16]) -> usize {
    slice.len()
}
#[cfg(not(windows))] 
fn len(slice: &[u8]) -> usize {
    slice.len()
}

you can now, more succinctly, write:

fn len(
    #[cfg(windows)] slice: &[u16], // This parameter is used on Windows.
    #[cfg(not(windows))] slice: &[u8], // Elsewhere, this one is used.
) -> usize {
    slice.len()
}

The attributes you can use in this position include:

  1. Conditional compilation: cfg and cfg_attr

  2. Controlling lints: allow, warn, deny, and forbid

  3. Helper attributes used by procedural macro attributes applied to items.

    Our hope is that this will be used to provide more readable and ergonomic macro-based DSLs throughout the ecosystem.
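
For instance, category 2 makes it possible to silence a lint for one parameter without touching the rest of the function. A minimal sketch:

fn process(
    // The lint is scoped to this one parameter; `data` is still checked.
    #[allow(unused_variables)] debug_label: &str,
    data: &[u8],
) -> usize {
    data.len()
}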

Borrow check migration warnings are hard errors in Rust 2018

In the 1.35.0 release, we announced that NLL had come to Rust 2015 after first being released for Rust 2018 in 1.31.

As noted in the 1.35.0 release, the old borrow checker had some bugs which would allow memory unsafety. These bugs were fixed by the NLL borrow checker. As these fixes broke some stable code, we decided to gradually phase in the errors by checking if the old borrow checker would accept the program and the NLL checker would reject it. If so, the errors would instead become warnings.

With Rust 1.39.0, these warnings are now errors in Rust 2018. In the next release, Rust 1.40.0, this will also apply to Rust 2015, which will finally allow us to remove the old borrow checker, and keep the compiler clean.

If you are affected, or want to hear more, read Niko Matsakis’s blog post.

More const fns in the standard library

With Rust 1.39.0, the following functions became const fn:

  • Vec::new, String::new, and LinkedList::new
  • str::len, [T]::len, and str::as_bytes
  • abs, wrapping_abs, and overflowing_abs

Additions to the standard library

In Rust 1.39.0 the following functions were stabilized:

  • Pin::into_inner
  • Instant::checked_duration_since and Instant::saturating_duration_since

Other changes

There are other changes in the Rust 1.39.0 release: check out what changed in Rust, Cargo, and Clippy.

Please also see the compatibility notes to check if you’re affected by those changes.

Contributors to 1.39.0

Many people came together to create Rust 1.39.0. We couldn’t have done it without all of you. Thanks!

The Rust Programming Language Blog: Async-await on stable Rust!

On this coming Thursday, November 7, async-await syntax hits stable Rust, as part of the 1.39.0 release. This work has been a long time in development — the key ideas for zero-cost futures, for example, were first proposed by Aaron Turon and Alex Crichton in 2016! — and we are very proud of the end result. We believe that Async I/O is going to be an increasingly important part of Rust’s story.

While this first release of “async-await” is a momentous event, it’s also only the beginning. The current support for async-await marks a kind of “Minimum Viable Product” (MVP). We expect to be polishing, improving, and extending it for some time.

Already, in the time since async-await hit beta, we’ve made a lot of great progress, including making some key diagnostic improvements that help to make async-await errors far more approachable. To get involved in that work, check out the Async Foundations Working Group; if nothing else, you can help us by filing bugs about polish issues or by nominating those bugs that are bothering you the most, to help direct our efforts.

Many thanks are due to the people who made async-await a reality. The
implementation and design would never have happened without the
leadership of cramertj and withoutboats, the implementation and polish
work from the compiler side (davidtwco, tmandry, gilescope, csmoe),
the core generator support that futures builds on (Zoxc), the
foundational work on Future and the Pin APIs (aturon,
alexcrichton, RalfJ, pythonesque), and of course the input provided by
so many community members on RFC threads and discussions.

Major developments in the async ecosystem

Now that async-await is approaching stabilization, all the major Async
I/O runtimes are at work adding and extending their support for the
new syntax.

Async-await: a quick primer

(This section and the next are reproduced from the “Async-await hits
beta!” post.)

So, what is async-await? Async-await is a way to write functions that
can “pause”, return control to the runtime, and then pick up from
where they left off. Typically those pauses are to wait for I/O, but
there can be any number of uses.

You may be familiar with async-await from other languages, such as
JavaScript or C#. Rust’s version of the feature is similar, but with a
few key differences.

To use async-await, you start by writing async fn instead of fn:

async fn first_function() -> u32 { .. }

Unlike a regular function, calling an async fn doesn’t have any
immediate effect. Instead, it returns a Future. This is a suspended
computation that is waiting to be executed. To actually execute the
future, use the .await operator:

async fn another_function() {
    // Create the future:
    let future = first_function();
    
    // Await the future, which will execute it (and suspend
    // this function if we encounter a need to wait for I/O): 
    let result: u32 = future.await;
    // ...
}

This example shows the first difference between Rust and other
languages: we write future.await instead of await future. This
syntax integrates better with Rust’s ? operator for propagating
errors (which, after all, are very common in I/O). You can simply
write future.await? to await the result of a future and propagate
errors. It also has the advantage of making method chaining painless.
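
For example (a minimal, self-contained sketch; the download function
and Error type here are hypothetical, not from the post):

// A hedged sketch of `?` composing with `.await`. The `download`
// function and `Error` type are invented for illustration.
#[derive(Debug)]
struct Error;

async fn download(_url: &str) -> Result<String, Error> {
    Ok(String::from("response body"))
}

async fn fetch_length(url: &str) -> Result<usize, Error> {
    // Await the future and propagate any error in one expression:
    let body = download(url).await?;
    Ok(body.len())
}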

Zero-cost futures

The other difference between Rust futures and futures in other
languages is that they are based on a “poll” model, which makes them
zero cost. In other languages, invoking an async function
immediately creates a future and schedules it for execution: awaiting
the future isn’t necessary for it to execute. But this implies some
overhead for each future that is created.

In contrast, in Rust, calling an async function does not do any
scheduling in and of itself, which means that we can compose a complex
nest of futures without incurring a per-future cost. As an end-user,
though, the main thing you’ll notice is that futures feel “lazy”:
they don’t do anything until you await them.
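
That laziness is easy to observe. A minimal sketch, assuming the
futures crate (version 0.3) as a dependency for its block_on executor:

async fn say_hello() {
    println!("hello");
}

fn main() {
    // Calling the async fn creates the future but runs nothing yet;
    // this line prints nothing.
    let future = say_hello();

    // An executor, here block_on from the futures crate, drives the
    // future to completion and prints "hello".
    futures::executor::block_on(future);
}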

If you’d like a closer look at how futures work under the hood, take a
look at the executor section of the async book, or watch the
excellent talk that withoutboats gave at Rust LATAM 2019
on the topic.

Summary

We believe that having async-await on stable Rust is going to be a key
enabler for a lot of new and exciting developments in Rust. If you’ve
tried Async I/O in Rust in the past and had problems — particularly
if you tried the combinator-based futures of the past — you’ll find
async-await integrates much better with Rust’s borrowing
system. Moreover, there are now a number of great runtimes and
other libraries available in the ecosystem to work with. So get out
there and build stuff!

Daniel Stenberg: curl 7.67.0

It has been 56 days since curl 7.66.0 was released. Here comes 7.67.0!

This might not be a release with any significant bells and whistles that will make us recall this date when looking back, but it is still another steady step along the way, and thanks to the new things introduced we still bump the minor version number. Enjoy!

As always, download curl from curl.haxx.se.

If you need excellent commercial support for whatever you do with curl, contact us at wolfSSL.

Numbers

the 186th release
3 changes
56 days (total: 7,901)
125 bug fixes (total: 5,472)
212 commits (total: 24,931)
1 new public libcurl function (total: 81)
0 new curl_easy_setopt() options (total: 269)
1 new curl command line option (total: 226)
68 contributors, 42 new (total: 2,056)
42 authors, 26 new (total: 744)
0 security fixes (total: 92)
0 USD paid in Bug Bounties

The 3 changes

Disable progress meter

Since virtually forever you’ve been able to tell curl to “shut up” with -s. The long version of that is --silent. Silent makes the curl tool disable the progress meter and all other verbose output.

Starting now, you can use --no-progress-meter, which more granularly disables only the progress meter and lets the other verbose output remain.
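
For example (with example.com as a stand-in URL), this fetches quietly while still showing errors and any other requested output:

curl --no-progress-meter https://example.com/ -o saved.html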

CURLMOPT_MAX_CONCURRENT_STREAMS

When doing HTTP/2 using curl and multiple streams over a single connection, you can now also set the number of parallel streams you’d like to use, and that number will be communicated to the server. The idea is that this option should eventually work for HTTP/3 as well, but since HTTP/3 support is still in its early days, it doesn’t yet.

CURLU_NO_AUTHORITY

This is a new flag that the URL parser API supports. It tells the parser that, even if it doesn’t recognize the URL scheme, it should still allow the URL to lack an authority part (such as a host name).

Bug-fixes

Here are some interesting bug-fixes done for this release. Check out the changelog for the full list.

Winbuild build error

The winbuild setup to build with MSVC with nmake shipped in 7.66.0 with a flaw that made it fail. We had added the vssh directory but not adjusted these build scripts for it. The fix was of course very simple.

We have since added several winbuild builds to the CI to make sure we catch these kinds of mistakes earlier and better in the future.

FTP: optimized CWD handling

At least two landed bug-fixes make curl avoid issuing superfluous CWD commands (FTP lingo for “cd” or change directory), thereby reducing latency.

HTTP/3

Several fixes improved HTTP/3 handling: it now builds better on Windows, the ngtcp2 backend also behaves correctly on macOS, and the build instructions are clearer.

Mimics socketpair on Windows

Thanks to the new socketpair look-alike function, libcurl now provides a socket for the application to wait on even while doing name resolution in the dedicated resolver thread. This brings the Windows code up to speed with the similar change that landed in 7.66.0, and makes it easier for applications to behave correctly during the short time gaps when libcurl resolves a host name and nothing else is happening.

curl with lots of URLs

With the introduction of parallel transfers in 7.66.0, we changed how curl allocates handles and sets up transfers ahead of time. This meant that command lines using ranges such as [1-1000000] would create a million CURL handles and thus use a lot of memory.

It did in fact break a few existing use cases where people used very large ranges with curl. Starting now, curl only creates enough curl handles ahead of time to allow the maximum amount of parallelism requested, so users should once again be able to specify ranges with many millions of iterations.
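
For example (with example.com as a stand-in URL), a large range like this should once again run in modest memory:

curl --parallel -O "https://example.com/file[1-1000000].txt"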

curl -d@ was slow

It was discovered that if you ask curl to post data with -d @filename, that operation was unnecessarily slow for large files; it has now been sped up significantly.

DoH fixes

Several corrections were made after some initial fuzzing of the DoH code, including a benign buffer overflow, a memory leak, and more.

HTTP/2 fixes

We relaxed the :authority push promise checks, fixed two cases where libcurl could “forget” a stream after it had delivered all data, and stopped dup’ed HTTP/2 handles from issuing dummy PRIORITY frames!

connect with ETIMEDOUT now makes CURLE_OPERATION_TIMEDOUT

When libcurl’s connect attempt fails and errno says ETIMEDOUT, it means that the underlying TCP connect attempt timed out. This is now reflected back in the libcurl API as the timed-out error code instead of the previously used CURLE_COULDNT_CONNECT.

One of the use cases for this is curl’s --retry option, which now considers this situation to be a timeout and thus will consider it fine to retry…

Parsing URL with fragment and question mark

There was a regression in the URL parser that made it mistreat URLs without a query part but with a question mark in the fragment.