Exploring Rust’s Type System: Part 1

Understanding Ownership, Resource Management, Aliasing, Mutation, and the Borrow Checker in the Rust Type System
Theory
Code
Static
Type System
Author

Sanjeevi

Published

August 5, 2023

Table Of Contents

Ownership and Move Semantics

Aliasing and Mutation

Lifetime

Region Based Resource Management

What is a Type System in a Programming Language?

At the lowest level, computers are only concerned with bytes, which are composed of zeros and ones. Bytes lack additional structure, but at a higher level, types provide various interpretations of these bytes based on how we choose to represent specific domains.

Directly interacting with computer hardware is inherently unsafe. Hardware understands only zero or one, with the exception of experimental quantum computers. A type system exists regardless of whether a programming language is statically typed or dynamically typed; the distinction lies in when the types are known. Without types, we can't effectively communicate our intent to the computer: at some point we would make mistakes, or we would have to exercise extreme caution when interacting with the lowest layers of the technology stack (hardware).

A type system is a fundamental concept in computer programming and software engineering. It categorizes values into different types based on their behavior, structure, and usage. The type system enforces rules and constraints on how various types of values can interact, ensuring correctness, safety, and efficiency in a program. It aids in detecting errors during the compilation process and offers a way to understand code behavior without necessarily executing it.

Types make sense only at the abstraction level, which is desirable. For example, the binary representation of the character a is the same as the binary representation of the integer 97. While they appear identical at the CPU level, they are distinguished by their types at the abstraction level. This safeguard protects us from invalid operations, catching type mismatches before assembly code is generated (although assembly is typeless, Typed Assembly also exists).
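
As a small illustration, the byte 97 and the character a share the same bit pattern, but the type system keeps the two interpretations apart:

fn main() {
    //The same byte pattern interpreted through two different types
    let as_char: char = 'a';
    let as_number: u8 = b'a';
    assert_eq!(as_number, 97);
    //The compiler rejects mixing them without an explicit conversion:
    //let sum = as_char + 1; // error: cannot add an integer to `char`
    let sum = as_number + 1;
    println!("{as_char} {as_number} {sum}");
}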

A language is considered strongly typed if it avoids implicit type coercion, ensures values are always initialized before use, prevents wild pointers, and eliminates type confusion. Consequently, strongly typed programs are well-structured and do not encounter type-related issues. Consider the following Rust function as an example:

fn add(x: u8, y: u8) -> u8 {
    x + y
}

Regarding this function, we can make the following observations:

  1. The function accepts two parameters of type u8. It’s important to note that these types must be initialized before being passed into the function. Since the type requirement is u8, it ensures that no negative values are accepted. Additionally, this function doesn’t cause any side effects beyond combining the two u8 values, as Rust functions can’t capture variables outside their own body.

  2. Adding two u8 values will produce another u8 value, adhering to Rust’s strict type system that doesn’t allow implicit conversions.

  3. In the event of an overflow, such as the result exceeding the maximum value of a u8, Rust’s behavior depends on the compilation mode. In debug mode, the program will panic, halting execution and providing a detailed error message, which helps catch such bugs during development. In release mode, the result wraps around and the program continues executing, potentially leading to unexpected results; this trade-off is made for performance. If we want the behavior to be the same in both modes, we can state it explicitly, as sketched below.
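
The integer types provide checked, wrapping, and saturating arithmetic for exactly this purpose; a minimal sketch:

fn add(x: u8, y: u8) -> u8 {
    x + y
}

fn main() {
    //200 + 100 exceeds u8::MAX (255): `add(200, 100)` panics in a
    //debug build and wraps to 44 in a release build.
    //These methods make the intent explicit regardless of build mode:
    assert_eq!(200u8.checked_add(100), None);       //None on overflow
    assert_eq!(200u8.wrapping_add(100), 44);        //wraps around
    assert_eq!(200u8.saturating_add(100), u8::MAX); //clamps to 255
    println!("{}", add(2, 3));
}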

Statically typed means that the compiler has information about all of the variables and their types at compile time, and it performs most of its checks at compile time. This leaves very minimal type checking at runtime, such as bounds checking and integer overflow handling. Rust also supports type inference, which allows us to omit explicit type annotations in many cases.
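
A small sketch of type inference in action (the variable names are just for illustration):

    //The compiler infers most types from usage; annotations are optional here
    let count = 42;              //inferred as i32, the default integer type
    let names = vec!["a", "b"];  //inferred as Vec<&str>
    //An explicit annotation is still allowed where it adds clarity
    let total: usize = names.len() + count as usize;
    println!("{total}");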

The static type system also aids in maintaining large-scale software and adding new features to existing code without breaking other parts of the code. Rust’s type system supports you when changing or updating code, as long as it compiles successfully.

Rust’s type system goes beyond protecting against invalid operations on types. Memory safety and concurrency bugs are also addressed through the type system. This means we can prevent memory errors much like how type mismatch errors are detected before the code is executed. This introduces another dimension to programming paradigms. While there are other languages like Vale, Idris, and Pony that have similar type systems to Rust’s, they have not been as widely adopted.

With the expressive and robust type system, Rust can eliminate or catch more errors at compile time, even logic errors like incomplete case coverage, improper use of integers in control flows and loops, and attempting to write data when a read lock is held. This is why Rust has a steeper learning curve than other languages in common use. However, substantial efforts have been put into improving the ergonomics of the language, providing learning resources, and offering comprehensive documentation.
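
As one example, incomplete case coverage is rejected at compile time: a match over an enum must handle every variant. A minimal sketch, with a made-up Status enum:

enum Status {
    Active,
    Suspended,
    Closed,
}

fn label(status: Status) -> &'static str {
    //Removing any arm below is a compile-time error,
    //so incomplete case coverage can't slip through
    match status {
        Status::Active => "active",
        Status::Suspended => "suspended",
        Status::Closed => "closed",
    }
}

fn main() {
    println!("{}", label(Status::Suspended));
}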

Other languages with expressive type systems include Swift, Haskell, and OCaml. Some concepts are easy to express in Rust but not in other languages, and vice versa. Choosing the right language depends on the context and trade-offs. For example, opting for plain JavaScript when writing new web applications could lead to more runtime errors. NoRedInk uses Elm to build their web app, experiencing minimal or no runtime errors even after long periods in production. The Elm type system is designed for creating web applications, which simplifies the learning process.

Single Ownership and Move Semantics:

In Rust, every value (excluding references) owns its data, meaning the owner is responsible for memory cleanup. Unlike languages with garbage collectors, Rust lacks such a mechanism. The ownership rules in Rust combine the benefits of automatic memory management, like that of a garbage collector, with the performance of manual memory management found in languages like C/C++. The compiler handles memory deallocation at a known point, which eliminates concerns about memory cleanup. The abstraction of heap memory allocation and deallocation is transparent to the programmer. The compiler essentially performs the memory management tasks a C++ programmer would manually do, preventing memory leaks and issues like double freeing due to single ownership.

Types marked as “Copy” are implicitly copied when assigned to a new variable. Types that implement “Clone” but not “Copy” are moved instead, meaning the original owner loses access to the data and its variable becomes uninitialized. Rust prevents further use of this variable unless it’s re-initialized after the move. In this context, “move” only refers to the pointer stored on the stack, not the actual heap data, so moving is cheap, whether in single-threaded or multi-threaded code. If ownership is transferred to a different thread, the value can’t be used in the thread where it was initially created.

Expressing single ownership using linear types or affine types is straightforward and prevents issues like double freeing and use-after-free without requiring runtime checks. Linear types are used exactly once, which might limit a language, whereas affine types are used at most once. Affine types provide the same safety as linear types but offer more practical flexibility for use in languages like Rust to express various patterns.

Here, “value” refers to type T without any & or &mut preceding it.

  • If T is “Copy”, it’s implicitly copied upon assignment. Since these types are plain stack values with no heap allocation, they have no special behavior when they go out of scope. Copy semantics involve no ownership transfer in Rust; they are opted into with the Copy marker trait, which allows move and copy semantics to coexist in the language.

  • If T is “Clone” but not “Copy”, it’s implicitly moved when assigned. For types implementing the “Drop” trait, the compiler calls the drop function at the end of the owner’s scope to release the resource. It also ensures that no references to the value outlive that scope. Ownership of the data doesn’t imply the ability to modify it unless the binding is prefixed with mut. Values can’t be used beyond their original scope, preventing use-after-free and dangling pointers, and single ownership prevents double freeing: if we attempt to move (and thus clean up) a value twice, the compiler reports a “use of moved value” error.

Ownership can be transferred through assignment, by passing values to functions or closures, or by returning them. Calling clone on Clone-only types (move types) creates an independent copy, allowing different variables to own their data independently. When the scope ends, Drop (the destructor) runs for every owned value independently, since ownership can’t be aliased. There is an exception to this rule, which will be covered in part 3 of this series.

fn main() {
    //Dropped immediately because it is not bound to a variable
    let _ = String::from("Not bound to anything");

    //Copy types
    let a = 10;
    //implicitly copied
    let b = a;
    //explicitly cloned
    let c = b.clone();
    //This code wouldn't compile if these were move types
    println!("{a} {b} {c}");

    //Each variable owns its data, i.e. it is not aliased
    let uqe_owner1 = vec![14, 5, 78];

    //An explicit clone on move types causes a new heap allocation
    let uqe_owner2 = uqe_owner1.clone();

    let mut first = String::from("A type that implements Clone and Drop");
    //first is moved into second
    let second = first;
    //Here the variable first is uninitialized and can't be accessed

    //But its type information is still there, so we can initialize it again,
    //provided it is mutable and used again
    first = "Reinitialized after the move".to_string();

    //Both first and second are now owned by the variable vec_of_string
    //Each String owns its data and the Vec owns its buffer
    let vec_of_string = vec![
        first,
        second,
        String::from("One"),
        String::from("Two"),
        String::from("Three"),
    ];

    //The if-else expression returns ownership
    //Conditional moving, no duplicates
    let returned = if true {
        //If true, x takes ownership
        x(vec_of_string)
    } else {
        //Otherwise y takes ownership
        y(vec_of_string)
    };
    //The variable vec_of_string can't be used here
    //println!("{vec_of_string:?}");
    //The returned variable has the ownership now
    println!("{returned:?}"); //Value dropped here
}

//Ownership is received and returned to the caller
//the signature/type is T, not &T or &mut T
fn x(x: Vec<String>) -> Vec<String> {
    x
}
//No value is destroyed in x or y
fn y(y: Vec<String>) -> Vec<String> {
    y
}

Not only does single-threaded code prevent us from using data once it’s moved, but the same move semantics also apply in multi-threaded code.
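
For example, moving a value into a spawned thread makes it unusable in the thread that created it; a minimal sketch using std::thread:

use std::thread;

fn main() {
    let data = vec![1, 2, 3];

    //Ownership of `data` moves into the closure and thus into the new thread
    let handle = thread::spawn(move || {
        println!("{data:?}");
    });

    //`data` was moved, so it can no longer be used on this thread
    //println!("{data:?}"); // error: borrow of moved value

    handle.join().unwrap();
}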

Borrow Checker and Lifetime:

Single ownership on its own is restrictive: when we only want to read the data, we have to pass ownership back and forth, even though ownership is not necessary just to read or write it. This is where the borrow checker comes into play, relaxing the restrictions of singly owned types to provide more flexibility, similar to pointers in C/C++. However, references in Rust are not the same as pointers in C/C++. Rust references are distinct in several ways: they are always initialized before use, are never null, are properly aligned, follow a restricted aliasing model, and carry lifetimes.

In Rust, there are equivalents to raw pointers in C/C++, represented by *const T for immutable and *mut T for mutable pointers; they can be created in safe code, but dereferencing them requires an unsafe block. Rust references come in two variations:

  1. Immutable Reference - &T
  2. Mutable Reference - &mut T

References, as the name implies, do not own the data they point to. Instead, they provide temporary access to that memory. This is efficient when dealing with large heap allocations, as it avoids cloning data simply to access it. A plain reference occupies 8 bytes (64 bits) on 64-bit architectures, making it lightweight. Not all references are single-word entities, though: slices and trait objects are fat pointers, taking up two words of memory. This is similar to pointers and references in C++.
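
A quick way to see the difference, assuming a typical 64-bit target:

use std::mem::size_of;

fn main() {
    //Plain references are one word on a 64-bit target...
    assert_eq!(size_of::<&u64>(), 8);
    //...while slice and trait-object references are fat pointers:
    //pointer + length for slices, pointer + vtable for trait objects
    assert_eq!(size_of::<&[u8]>(), 16);
    assert_eq!(size_of::<&dyn std::fmt::Debug>(), 16);
}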

Rust’s uniqueness shines in its clear distinction between references and the restrictions it places on them. These restrictions help mitigate errors often associated with pointer usage.

Aliasing and Mutation:

Aliasing and mutation together can lead to issues in both single-threaded and multi-threaded code. In this section, we’ll focus on single-threaded code. Types that can grow or shrink through mutable operations can invalidate earlier reads or writes. There are various ways this can occur in Rust; here is one:

    //with_capacity is used here to show that the buffer will grow
    //once we push more than 24 bytes
    let mut string = String::with_capacity(24);
    string.push_str("A mutable data structure");

    //Storing references to different regions of the data for read-only access
    //For single-byte characters, range indexing is okay,
    //but for multi-byte characters this may panic
    let sub_str1 = &string[0..5];
    let sub_str2 = &string[5..];

    //The length is 24 since we pushed 24 bytes
    println!("{}", string.len());

    //Pushing more elements causes the String to grow,
    //allocating a larger buffer and moving all elements into it
    for char in 'a'..'z' {
        string.push(' ');
        string.push(char);
    }

    //Reading through the owner is safe:
    //the owner is responsible for updating its pointer to the newly allocated buffer
    println!("{string}");

    //But reading through the references is not, because they point to
    //memory where the string initially lived, which may no longer be
    //valid after the mutation.
    //So the borrow checker forbids this:
    //println!("{sub_str1} {sub_str2}");

Simultaneous writes can also lead to problems. Imagine we have two mutable references to the same data. If we write through the first mutable reference after using the second, we may read or write through a stale pointer, because the write through the second reference might cause the owner to reallocate and move the data. If this were allowed, the first mutable reference could write to memory it no longer points at validly. Due to these potential problems, Rust imposes restrictions on references to ensure the safety of the code.

Immutable Reference:

  • A reference of type &T is Copy: borrowing from another borrower merely duplicates the same permission the previous borrower had. This means we can have multiple immutable references to the data, i.e. they are freely aliased. This is not problematic because no mutable reference can modify the data while we are reading it, and an immutable reference cannot write to it.
  • It’s immutable because we cannot alter the data behind an immutable reference.
    //A type of T
    let referent: bool = true;
    //A type of &T
    let borrower1: &bool = &referent;
    //copied
    let borrower2 = borrower1;
    //we can still create another reference from the referent
    let borrower3 = &referent;

    //We can read any of the references and the referent itself
    println!("{referent} {borrower1} {borrower2} {borrower3}");

Mutable Reference:

  • A reference of type &mut T is unique in the sense that we can’t create multiple mutable references just by duplicating the mutable borrow, as is possible with immutable borrows.

  • Mutable references cannot be aliased, similar to the move semantics of ownership. However, unlike ownership, mutable references do not own the data; they grant exclusive permission to modify the owned data.

  • The data owner itself does not have access to the data while mutable references are in existence.

  • This concept is analogous to the ReadWriteLock or XOR pattern in multi-threaded code, with the distinction that it’s statically verified and carries no overhead when used in single-threaded code.

    //A type of T
    let mut referent: bool = true;
    //A type of &mut T
    let borrower1: &mut bool = &mut referent;
    //moved
    let borrower2 = borrower1;
    //borrower1 was moved, so we can't use it here
    //println!("{borrower1}");

Both mutable and immutable references don’t own the data they point to, so no destructor runs when they go out of scope; they only grant access within the scope of their use. References are more akin to requesting permission to access data: they permit temporary use and must give it up when their scope ends. We can have either multiple immutable references or a single mutable reference to the data, but not both at the same time. Because this is checked statically, the read/write conflicts that lead to deadlocks can’t occur in such code, though deadlocks are still possible with smart pointers and locks.

The distinction between mutable and immutable references, and the restriction to either multiple immutable borrows or a unique mutable borrow, is what prevents iterator invalidation statically, without the undefined behavior seen in C/C++, the runtime exceptions of Java, or the infinite loops possible in Python. A function that expects a mutable reference cannot be passed an immutable reference, since the signatures differ. The opposite works, however, because Rust automatically coerces a mutable reference to an immutable one without violating memory safety.
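
A minimal sketch of how iterator invalidation is rejected at compile time:

fn main() {
    let mut numbers = vec![1, 2, 3, 4];

    for n in &numbers {
        //Pushing while iterating could reallocate the Vec and invalidate
        //the iterator, so the borrow checker rejects the mutable borrow:
        //numbers.push(*n); // error: cannot borrow `numbers` as mutable
        println!("{n}");
    }

    //Once the immutable borrow ends, mutation is allowed again
    numbers.push(5);
}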

These restrictions make our code easier to reason about and also enable the compiler to optimize the code more effectively, as the absence of unrestricted aliasing allows the compiler to make assumptions that would otherwise lead to incorrect usage in the presence of such aliasing.

While the restrictions on references prevent certain bugs, they also cause the borrow checker to reject code patterns that are actually safe. The introduction of Non-Lexical Lifetimes (NLL) improves this situation. The read/write-lock pattern can feel like overkill for types like arrays, integers, and even slices in single-threaded code, although it still matters in multi-threaded code: these types don’t dynamically grow or shrink, so mutating them can’t invalidate other references the way mutating growable data can. It’s noteworthy that in single-threaded code, only mutable and growable types can lead to the aliasing/mutation problems described above.
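
With NLL, a borrow ends at its last use rather than at the end of its lexical scope, so code like the following sketch compiles:

fn main() {
    let mut value = String::from("hello");

    let r = &value;   //immutable borrow starts here...
    println!("{r}");  //...and ends at its last use under NLL

    value.push_str(" world"); //so this mutation is accepted
    println!("{value}");
}

Even so, the snippet below, operating on a plain i32, is still rejected even though it would be memory safe.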

    //A Copy type
    let mut a: i32 = 10;
    let ref1 = &a;
    let ref2 = &a;
    let mut_ref1 = &mut a;
    let mut_ref2 = &mut a;
    //This doesn't compile: we can't use the earlier
    //mutable borrow to modify the data...
    *mut_ref1 += 1;
    //...nor access the earlier read-only references,
    //even though it would be completely memory safe here
    println!("{} {}", ref1, ref2);

Unfortunately, the borrow checker doesn’t distinguish between mutable non-growable types and mutable growable types, in the way that ownership and move semantics are simply not applied to Copy types. Instead of complicating the borrow checker, these restrictions are alleviated through the Cell types.
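
A minimal sketch of Cell, which allows shared mutation of a Copy value by never handing out references to its contents:

use std::cell::Cell;

fn main() {
    //Cell moves values in and out instead of giving out references,
    //so aliased mutation of a Copy type stays safe without &mut
    let counter = Cell::new(0);

    let alias1 = &counter;
    let alias2 = &counter;

    alias1.set(alias1.get() + 1);
    alias2.set(alias2.get() + 1);

    assert_eq!(counter.get(), 2);
}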

When you create a mutable reference to collections, the borrow checker assumes that you are creating a mutable reference to the entire dataset, even when this might not be the case. Instead of resorting to unsafe blocks to clarify our intentions, we can utilize methods that are defined on those types to safely mutate them. Collections, such as slices, provide methods for safely mutating non-overlapping regions independently.

    let mut vector = vec![1, 3, 5, 67, 78, 9];
    //Borrowing what we know are non-overlapping regions
    let mut_ref_to_1 = &mut vector[0];
    let mut_ref_to_2 = &mut vector[5];
    //This would be safe, but the borrow checker forbids it
    //because both are mutable borrows of the whole vector
    //*mut_ref_to_1 += 10;
    //*mut_ref_to_2 += 23;

    //Mutate different elements through methods
    //instead of through direct indexing
    if let Some(last) = vector.last_mut() {
        *last = 11;
    }
    if let Some(first) = vector.first_mut() {
        *first = 10;
    }
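
If we really need two mutable references into the same collection at once, slices also provide split_at_mut, which hands back mutable borrows of two non-overlapping halves; a minimal sketch:

    let mut vector = vec![1, 3, 5, 67, 78, 9];
    //split_at_mut divides one mutable borrow into two
    //non-overlapping mutable slices
    let (left, right) = vector.split_at_mut(3);
    left[0] += 10;                //mutate the first half
    right[right.len() - 1] += 23; //mutate the second half
    assert_eq!(vector, vec![11, 3, 5, 67, 78, 32]);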

Lifetimes and Scopes to Validate References:

A lifetime, as the name suggests, is used to determine how long a reference can live and remain valid. Lifetimes in Rust only exist as compile-time annotations, as Rust does not have a garbage collector. When a reference is created, the compiler implicitly tags it with a lifetime.

Different references receive different lifetime tags, and references of references and beyond depend on other lifetimes. The scope refers to the region where a reference or referent is created. Due to these factors, the order of where something is defined and used becomes significant.

Lifetimes are not only associated with references (types like &T and &mut T) but also with the referent (type T). Dynamically allocated data, and even stack-allocated data created outside the global scope, have a shorter lifetime, limited to the scope in which they were created. Only items declared as static or const have the static lifetime, and there are restrictions: we cannot use dynamically allocated data structures or mutable references in a static or const context, and const values cannot be mutable at all. Non-const initialization can be achieved with lazy initialization from the standard library or from crates on crates.io. A longer lifetime can be coerced into a shorter one, but not the other way around. A string literal, for example, has the static lifetime, which is shortened when we use a reference to it within a specific scope.

Lifetimes are declared using an apostrophe followed by lowercase letters, such as 'a, 'env, 'lifetime, in the generic parameter lists of fn, impl blocks, structs, enums, and traits with associated types. The static lifetime is written 'static. There is no such type as 'static T if T is non-const (not evaluated at compile time), but &'static T and &'static mut T are valid. The static lifetime implies that the data lives as long as the program runs, but its accessibility depends on the scope in which it was declared. If defined at the global scope, it can be accessed throughout the program; if defined inside a function body, such as main, it can’t be referred to beyond that function’s scope. The data is still present in the binary; it’s just not accessible beyond its defining scope.

    use std::collections::HashMap;
    let mut hashmap = HashMap::new();
    {
        //static created inside the lexical scope
        static VALUE: i32 = 10;
        hashmap.insert("Key", &VALUE);
        // or
        // hashmap.insert("Key",VALUE);
    }
    //The static VALUE can't be accessed here but
    //through hashmap
    println!("{}", hashmap.get("Key").unwrap());

No reference can outlive its referent; this is statically validated using lifetimes. The referent must outlive the reference; otherwise, we may have dangling pointers. When the referent is destroyed, accessing the reference would be invalid.

    {
        //Both the referent and the reference are created in this scope,
        //thus destroyed in this scope too
        let vector = vec![56, 89, 34];
        let reference = &vector;
        println!("{reference:?}");
        //Here both vector and reference are freed
    }
    //Using them here would be use-after-free for both
    //the reference and the referent vector
    //println!("{vector:?} {reference:?}");

    //vector is declared here in the outer scope
    let vector; //'a
    {
        //vector is initialized in the inner scope
        //but still has the lifetime of the outer scope
        vector = vec![1];
        //The reference, however, is created in the inner scope
        //and can't exist beyond it
        let reference = &vector;
    }
    //vector is still accessible here
    println!("{vector:?}");
    //but the reference is not; this line wouldn't compile:
    //println!("{reference}");
    //The opposite case: the reference is declared in the outer scope
    let reference;
    {
        let referent = vec![1];
        //Even though reference is defined in the outer scope,
        //it points to data created inside this scope,
        //so it can't be used beyond this scope
        reference = &referent;
    }

    //The variable reference is invalidated when the
    //scope above ends, so we can't use it here
    //println!("{reference:?}");

    //The type of the variable is fixed once it is initialized:
    //reference was initialized as a &Vec<i32> in the scope above,
    //so we can't assign it anything of a different type here
    //reference = 10;

    //But we can shadow it with a different type using let
    let reference = 12;
    let mut referent = vec![1];
    let reference = &mut referent;
    {
        //While the mutable borrow is live we can only go through it:
        //we can either read/write through the reference, or read the
        //referent directly, but not both at the same time
        println!("{reference:?}");
    }
    //The inner scope owns the data, so we can't return a reference to it;
    //we can only return the data itself, i.e. move it to the outer scope
    let reference = {
        //We can't use a reference to data created in this scope
        //outside of it, so returning such a reference doesn't compile:
        let string = String::from("Created in this scope");
        //The & operator returns a reference to the String,
        //which would dangle once string is dropped here
        &string
    };

A lifetime can be conditional: depending on which branch runs, a borrow can live longer or shorter.

    let mut x = 10;
    let y = &mut x;
    let bool_ = true;
    if bool_ {
        //If this branch runs, y can't be used after this assignment
        x = 11;
    } else {
        //Otherwise the borrow is valid until the end of this block,
        //so we can modify and print through it
        *y += 11;
        println!("{y}");
    }
    //We can't use y after the if-else because
    //we don't know which block will execute,
    //so the borrow checker refuses to compile this:
    //println!("{y}");

    //But the referent can be read no matter which block ran,
    //because either way x is valid here, holding the value
    //from the if or the else branch
    println!("{x}");

In the example above, the variable y itself is immutable, but it stores a mutable reference, so we can modify the original data (the referent) by dereferencing it. Conversely, if a variable is declared mut but stores an immutable reference, we can re-point the variable, but we cannot mutate the referent through it. The mut in front of the variable name and the mut inside the reference type have different meanings.
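
A small sketch of the two positions of mut:

    let x = 1;
    let y = 2;
    let mut z = 3;

    //mut on the binding: the variable can be re-pointed,
    //but the referent can't be modified through it
    let mut imm_ref: &i32 = &x;
    println!("{imm_ref}"); //prints 1
    imm_ref = &y;          //allowed: the binding itself is mutable
    //*imm_ref = 10;       // error: cannot assign through a `&` reference

    //mut in the type: the referent can be modified through the reference,
    //but the binding itself can't be re-pointed
    let mut_ref: &mut i32 = &mut z;
    *mut_ref = 10;
    //mut_ref = &mut z;    // error: the binding `mut_ref` is immutable
    println!("{imm_ref} {mut_ref}");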

Regions may be nested; an inner scope can refer to data in the outer scope because the outer scope outlives the inner one ('b: 'a, read as 'b outlives 'a). A reference in the outer scope can’t outlive the owner it borrows from.

 
    let a = 10;
    let b: &i32 = &a;  //&'a i32
    let c: &&i32 = &b; //&'b &'a i32
    {
        //The lifetime of a outlives the scope of d,
        //so borrowing a inside this nested scope is fine
        let d = &a;
    }

We usually don’t have to worry about lifetimes outside of generic contexts, because the compiler infers them most of the time. Lifetime errors can still be hard to debug in certain situations, and some lifetimes are hidden inside libraries or abstracted away from us.
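
Here is a small sketch of the generic case where explicit annotations are required, because the compiler can’t know on its own which input the returned reference borrows from:

//The returned reference lives as long as the shorter of the two inputs
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() > y.len() { x } else { y }
}

//A struct holding a reference must declare the lifetime it borrows for
struct Excerpt<'text> {
    part: &'text str,
}

fn main() {
    let novel = String::from("Call me Ishmael. Some years ago");
    let first = novel.split('.').next().unwrap();
    let excerpt = Excerpt { part: first };
    println!("{}", longest(excerpt.part, "short"));
}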

These simple examples don’t capture the full complexity of lifetime analysis. Higher-kinded types, subtyping, and variance are not addressed here, since I haven’t needed them in my own projects.

The distinction between these kinds of types is what allows Rust to offer different APIs that satisfy the borrow checker and the ownership rules. To understand which methods borrow (mutably or immutably) and which ones take ownership, refer to the documentation of the collection types and Iterator. Without these APIs, working in the Rust ecosystem would be much harder.
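
For instance, the iterator constructors on collections mirror the three modes of access; a minimal sketch:

fn main() {
    let mut words = vec![String::from("a"), String::from("b")];

    //iter(): borrows immutably, words stays usable afterwards
    for word in words.iter() {
        println!("{word}");
    }

    //iter_mut(): borrows mutably, so we can modify elements in place
    for word in words.iter_mut() {
        word.push('!');
    }

    //into_iter(): takes ownership, words is moved and unusable afterwards
    for word in words.into_iter() {
        println!("{word}");
    }
    //println!("{words:?}"); // error: value moved in the loop above
}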

Region/Scope-based Resource Management

This approach involves a static method of memory management that addresses several concerns:

  1. It prevents mistakes common in manual memory management.
  2. It avoids the need for a garbage collector.
  3. It eliminates the need for direct programmer intervention.

Instead of allocating memory freely and deallocating it in arbitrary places, this method ties memory allocation and deallocation to specific lexical scopes or regions. Memory or resources are allocated within a region and deallocated when the scope ends. The borrow checker ensures that references created within a region are not used outside of it. Consequently, issues like memory leaks, prolonged memory retention, and temporal memory errors such as use after free (UAF), dangling pointers (DP), and double free (DF) are eliminated.

Data structures like arrays, strings, vectors, and their slice variants carry length information. This enables runtime bounds checks, or eliminates them where the compiler can prove they are unnecessary. This addresses spatial memory errors such as out-of-bounds (OOB) accesses.

Rust does, however, provide an explicit way to leak memory using Box::leak. And where other languages might crash at runtime, Rust prefers to catch the error at compile time: not crashing due to use after free, dangling pointers, double free, null pointer exceptions (NPE), or uninitialized values means that runtime surprises that take down the application are averted.
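
A minimal sketch of deliberate leaking with Box::leak:

fn main() {
    //Box::leak gives up ownership of the heap allocation and returns
    //a &'static mut reference; the program never frees this memory
    let message: &'static mut String = Box::leak(Box::new(String::from("leaked")));
    message.push_str(" on purpose");
    println!("{message}");
    //The allocation is reclaimed only when the process exits
}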

Rust’s approach involves trade-offs. While it might lead to crashes during development, these crashes often help catch errors early on, preventing them from reaching production and causing unpredictable behaviors.

    let vector: Vec<i32> = vec![1, 5, 7, 87, 231];
    //Indices 0 to 3: the end bound 4 is exclusive,
    //so slice[4] inside the function panics for this call
    accept_sub_slice(&vector[..4]);
    //Indices 0 to 4, because ..= makes the end bound inclusive
    accept_sub_slice(&vector[..=4]);

    fn accept_sub_slice(slice: &[i32]) {
        //Even though the vector has a length of 5,
        //we can't access beyond the range we specified
        //when calling
        println!("{}", slice[4]);
    }

The function accepts a subslice of the vector. Inside the function body, we can’t access elements beyond the range we specified when calling, even though the vector has more elements. The indexing operator panics if the index is greater than or equal to the slice’s length, since collections are zero-indexed. The get and get_mut methods return an Option instead, allowing us to handle an out-of-bounds index explicitly rather than panicking.
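
A small sketch of the panic-free alternative using get:

    let vector = vec![1, 5, 7, 87, 231];

    //Indexing with [] panics on an out-of-bounds index:
    //let element = vector[10]; // panics at runtime

    //get returns an Option, so the miss is handled explicitly
    match vector.get(10) {
        Some(element) => println!("found {element}"),
        None => println!("index 10 is out of bounds"),
    }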

RAII pattern:

Resources encompass heap memory, database handles, locks, system resources like sockets and files, or any type that automatically cleans up using the Drop trait when the scope ends. This mirrors the deterministic and predictable performance of manual memory management in C/C++, yet with the added advantage of safety and without requiring programmer intervention.

use std::net::TcpListener;
fn main() {
    {
        let tcp = TcpListener::bind("127.0.0.1:8090").unwrap();
        //std::mem::forget(tcp);
    } //tcp is dropped here, which closes the socket
    {
        //Binding to the same port succeeds because the first
        //listener has already been closed
        let socket = TcpListener::bind("127.0.0.1:8090").unwrap();
    }
}

In the code above, I am demonstrating that we don’t have to explicitly close the socket. To illustrate this, I create another socket listening on the same port in a different scope. If the first socket weren’t closed, the program would panic, because the operating system doesn’t allow the port to be reused until the program is finished with it; in that case we would receive an AddrInUse error. That is exactly what happens if you uncomment the line above: mem::forget() takes ownership without running the destructor, so the socket is never closed and remains in use, and the unwrap() on the second bind panics. This is how other resources are handled in Rust as well.
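
The same pattern extends to our own types by implementing Drop; a minimal sketch with a made-up Connection type:

struct Connection {
    name: &'static str,
}

impl Drop for Connection {
    fn drop(&mut self) {
        //Clean-up runs automatically and deterministically at scope end
        println!("closing {}", self.name);
    }
}

fn main() {
    let _outer = Connection { name: "outer" };
    {
        let _inner = Connection { name: "inner" };
    } //"closing inner" is printed here
    println!("end of main");
} //"closing outer" is printed here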

Here are some use cases that demonstrate the benefits of ownership-based systems:

Hardware-based isolation incurs non-trivial performance costs, whereas software-based isolation does not carry the same expense. However, implementing software-based isolation in a programming language with unrestricted aliasing (like C) is hard to do efficiently, since aliasing breaks the assumptions static analysis relies on. Thanks to Rust’s single ownership and restricted aliasing model, the paper System Programming in Rust: Beyond Safety demonstrates that Rust enables software-based isolation without significant runtime overhead or reliance on hardware-specific features.

Even though applications based on blockchains are meant to be more secure, it’s crucial that the programming language itself is safe and can express the invariants those applications aim to uphold. This is why blockchain applications are developed in languages like Solidity and Obsidian rather than Java or even C++: their type systems can express contracts and prevent many errors at compile time that other languages can’t. Nevertheless, the landscape is changing, with platforms like Solana and CosmWasm using Rust in combination with WebAssembly to develop web3 platforms. Rust’s type system is versatile enough to be used in any application where security and reliability are paramount. Rust is a hybrid language, not just in comparison to other programming languages, but also within its own features: it encompasses ownership types and non-ownership types, safe Rust (under the borrow checker’s radar) and unsafe Rust (the programmer’s responsibility), as well as thread-safe and non-thread-safe data structures, all while adhering to Rust’s safety principles and usability. This versatility extends to domain-specific languages.

Understanding ownership and borrowing is crucial for comprehending and writing code using other language features such as traits, generics, closures, pattern matching, struct and enum implementations, and concurrency primitives. All of these were designed based on ownership and borrowing rules.