Introduction

Welcome to the Polkadot SDK Best Practices guide. This book compiles essential insights and best practices derived from BlockDeep's comprehensive code reviews across various Polkadot projects. The purpose of this guide is to help developers, reviewers, and contributors understand and address common issues that can arise in blockchain development when building with Polkadot SDK.

Why this guide?

Polkadot SDK is a powerful and flexible framework, but developing robust, secure, and efficient blockchain applications requires attention to detail and adherence to best practices. This guide serves as a practical reference to help you avoid common pitfalls, enhance code readability, and ensure optimal performance and security.

Structure

This guide is organized by issue severity to help prioritize improvements. Each section covers a severity level:

  • Critical: Issues that pose significant security or performance risks and require immediate action.
  • High: Issues that may not be catastrophic but could impact performance, security, or stability.
  • Medium: Issues that affect maintainability, readability, and efficiency, important to address for a well-optimized codebase.
  • Low: Minor concerns that can improve code clarity and efficiency, but with limited immediate impact.
  • Informational: Suggestions and recommendations that help maintain best practices but are not essential.

Each issue is presented with:

  • Description: An overview of the issue and its potential impact.
  • What should be avoided: Examples illustrating the risks or inefficiencies of poor implementation.
  • Best practice: Suggested best practices, often with code snippets, to guide proper implementation.

How to use this guide

Whether you are conducting a code review, refactoring, or writing new Substrate pallets, this guide is designed to provide actionable insights and practical solutions. We encourage readers to consult this guide regularly to maintain high standards of quality, security, and efficiency in their projects.

Let’s get started!


Issues overview

| Severity | Issue | Problem | Solution |
| --- | --- | --- | --- |
| Critical | Prioritize reserve asset transfer over teleport | Using teleports from multiple origins as the default token transfer method requires bilateral trust in issuance management and increases risk | Prioritize Reserve Asset transfers and use trusted reserves like Asset Hub; carefully configure xcm-executor permissions |
| Critical | Use appropriate origin checks | Open access on extrinsics without checks may allow unauthorized actions that can compromise security | Add access control checks to limit access to specific users or roles |
| Critical | Avoid unbounded iteration | Unbounded iterations over large data structures can lead to resource exhaustion and potential denial of service | Implement limits or use a bounded storage map for these iterations |
| Critical | Unchecked input data | Lack of input validation can lead to unexpected behaviors and potential vulnerabilities | Validate input data before processing to ensure safe and predictable behavior |
| Critical | Avoid unwrap usage inside runtime | Using unwrap() or expect() without proper error handling can lead to runtime panics and crashes | Handle errors gracefully with Result or Option types to prevent panics |
| Critical | Use benchmarking for accurate dynamic weights | Using hardcoded weights for extrinsics can lead to inaccurate resource estimations and performance issues | Implement benchmarking to dynamically assess the weights of functions, ensuring they accurately reflect execution costs |
| High | Make proper usage of XCM Junctions | Misuse of junction types (especially GeneralIndex) for purposes beyond their intended entity representation can lead to incorrect path routing | Use junctions strictly for their intended purpose of representing entities in Location paths; propose RFCs for new needs |
| High | Properly set up XCM Barrier | Improperly configured XCM executor barriers can allow unauthorized free execution from any origin | Implement restrictive barriers with explicit authorization for unpaid execution and clear documentation of intended uses |
| High | Ensure consistent asset registration by adhering to host chain schema | Inconsistent asset registration schemas across chains can lead to integration issues and complications in cross-chain asset handling | Study and follow the host chain's established schema for asset registration to maintain consistency |
| High | Benchmark extrinsic worst-case scenarios | Without benchmarks for worst-case scenarios, execution weights may be underestimated | Benchmark worst-case paths and update extrinsics to reflect these cases accurately |
| High | Keep dependencies up to date | Using outdated libraries may lead to security and compatibility issues | Regularly update dependencies to the latest stable versions for improved security and compatibility |
| High | Avoid the usage of pseudo random numbers | Using non-deterministic methods for selection can introduce manipulation opportunities | Adopt deterministic selection methods to ensure fairness |
| High | Use safe arithmetic operations | Unchecked arithmetic operations can lead to overflow errors | Use safe math functions such as checked_add to prevent overflows |
| High | Be careful with storage growth | Allowing unlimited entries in storage structures can lead to overflow and performance issues | Use bounded storage collections to prevent uncontrolled growth |
| High | Prevent inconsistent state by distributing finalization costs | Relying on a single transaction to finalize multiple operations can lead to errors if it fails | Use a claim-based or distributed finalization approach to avoid reliance on a single transaction |
| High | Use atomic operations to prevent state inconsistencies | Modifying multiple resources without transactional integrity may leave the system in an inconsistent state | Implement rollback mechanisms to ensure consistency in case of failure |
| High | Avoid redundant storage access in mutations | Using both try_mutate and insert leads to unnecessary storage accesses | Use try_mutate or try_mutate_exists to read, modify, and write in a single step |
| High | Prevent unnecessary reads and writes in storage access | Frequent reads and writes to storage without optimization can degrade performance | Use efficient storage access methods such as try_mutate to combine reads and writes |
| High | Implement try-state hook | The absence of try-state hooks prevents runtime sanity checks, making it harder to ensure that the storage state is sensible after upgrades | Implement the try-state hook to perform thorough state checks without altering storage |
| Medium | Implement proper XCM fee management | Using the FeeManager unit type without consideration leads to unintended fee burning rather than proper fee handling | Implement a proper FeeManager that either deposits or distributes fees, with clear handling of fee-exempt locations |
| Medium | Avoid unrestricted XCM execution | Allowing arbitrary users to send unrestricted XCM instructions, especially Transact operations, can create security vulnerabilities | Disable or strictly limit XCM execution permissions unless specifically required; restrict to privileged users |
| Medium | Remove deprecated storage getters | Using deprecated storage getters may lead to compatibility issues in future versions | Replace deprecated getters with the recommended methods in updated frameworks |
| Medium | Avoid hardcoded parameters and values | Hardcoding parameters can reduce flexibility and adaptability to different environments | Use configurable parameters to enhance adaptability |
| Medium | Include tests for edge cases | Omitting tests for boundary cases can lead to unhandled conditions and bugs | Include tests for boundary conditions to improve reliability |
| Medium | Include extrinsic documentation | Extrinsics without documentation can lead to misunderstandings regarding usage permissions and error handling | Provide detailed documentation for each extrinsic, including functionality and parameters |
| Medium | Include error documentation | Lack of documentation on error variants can make debugging difficult and slow | Document each error variant with its cause and handling details for easier troubleshooting |
| Medium | Provide event documentation | Events emitted by the runtime lack proper documentation, making it harder for users to understand their purpose | Provide detailed comments for each event to explain its purpose and usage |
| Medium | Provide pallet configuration documentation | Pallet configuration items that lack documentation can confuse developers and users | Document each pallet configuration item with a brief description of its purpose and constraints |
| Medium | Modularize large files | Large files reduce readability and make navigation difficult for developers | Split large files into smaller, focused modules to improve readability and navigation |
| Medium | Break down complex functions | Complex functions are harder to test, understand, and maintain, increasing the risk of errors | Apply the single responsibility principle to simplify functions and improve readability |
| Medium | Enhance performance with efficient data structures | Choosing inefficient data structures can lead to slowdowns and increased resource usage | Use search-efficient data structures like HashSet or BTreeSet for frequent lookups |
| Medium | Define constants to replace magic numbers | Magic numbers make code hard to understand and maintain due to lack of context | Define constants with descriptive names for better readability |
| Medium | Implement proper interface segregation | Overloaded interfaces make code harder to maintain and test due to complex dependencies | Separate interfaces into smaller, focused traits to improve modularity |
| Medium | Make BoundedVec size configurable | Hardcoded BoundedVec sizes limit flexibility and adaptability | Use configurable parameters for vector sizes to enhance flexibility |
| Medium | Enhance logging in migration scripts | Insufficient logging in migration scripts makes tracing progress and debugging harder | Add descriptive logs to migration scripts to track steps and conditions |
| Medium | Avoid redundant data structures | Storing the same data in multiple locations increases complexity and risks inconsistencies | Use a single structure as the primary source for data, and avoid duplicating fields across storage structures |
| Medium | Implement tests for all error cases | Lack of tests for error cases in extrinsics can lead to unhandled scenarios and unpredictable behavior | Add tests that verify expected errors are emitted when invalid inputs or conditions are encountered |
| Medium | Avoid resource-intensive execution inside hooks | Performing complex or large computations in hooks like on_finalize can slow block execution and reduce network performance | Distribute computations across extrinsics or allow users to manually trigger them outside of hooks |
| Medium | Transition away from the Currency trait | Using the deprecated Currency trait limits compatibility and functionality in future Substrate updates | Replace Currency with fungible traits, like Inspect and Mutate, for modular, future-proof balance management |
| Low | Use appropriate naming conventions | Inconsistent naming conventions reduce code readability | Adopt consistent, descriptive naming conventions across the codebase |
| Low | Avoid unnecessary cloning | Redundant code and cloning increase code size and decrease efficiency | Remove unused code and optimize cloning operations |
| Low | Avoid hardcoded error messages | Hardcoded error messages make localization and updates difficult | Centralize error messages for easier updates and localization |
| Low | Adopt enumerations for optional input | Using basic types instead of enums can lead to errors and reduces readability | Use enums to represent distinct categories for better readability and robustness |
| Low | Implement descriptive logging | Minimal logging lacks context, making troubleshooting difficult | Include context and relevant details in log messages |
| Low | Remove unnecessary return values | Returning values that are not modified or needed increases code complexity | Remove unnecessary return values for simplicity |
| Low | Avoid repetitive generic type instantiation | Defining complex generic types repeatedly increases verbosity and reduces maintainability | Use a type alias for specific instances of generic types to avoid duplication and enhance code readability |
| Low | Update benchmarks with deprecated syntax | Deprecated benchmarking syntax can lead to compatibility issues and lacks support for newer features | Use the updated #[benchmarks] module syntax to improve maintainability, readability, and future compatibility |
| Low | Expose runtime APIs for key functionalities | Failing to expose useful internal functions via runtime APIs limits client access and reduces system usability | Implement runtime APIs to expose key functions, enabling users and clients to access essential data |
| Low | Avoid unused code | Unused code can clutter a codebase, making it more difficult to read, maintain, and optimize | Regularly remove unused functions, variables, and redundant logic to keep code clean and efficient |
| Informational | Use proper naming criteria | Using commonly used terminology (like "foreign") can cause confusion and misunderstandings, especially when integrating with existing systems | Research ecosystem terminology and choose unique, clear names that avoid overlap with existing well-known terms |
| Informational | Maintain consistent documentation standards | Inconsistent documentation across modules creates knowledge gaps | Establish a consistent documentation standard across the codebase |
| Informational | Avoid typographical errors | Typos reduce professionalism and may confuse readers | Perform proofreading to catch typos and improve clarity |
| Informational | Make backend logic frontend-agnostic | Frontend-specific values in backend code may lead to conflicts with backend design | Ensure the backend remains frontend-agnostic to avoid inconsistencies |

Critical Severity Issues

Description

In the context of Polkadot SDK, critical issues represent the most severe vulnerabilities or performance concerns. These issues can lead to catastrophic failures, significant security breaches, data loss, or system outages. When unaddressed, they may compromise core blockchain functionalities, resulting in a complete breakdown of system integrity or operational stability.

Implications

Failure to resolve critical issues can expose the blockchain network to significant risks, including malicious attacks, economic security vulnerabilities, or denial of service. In production environments, these issues can cause irreparable harm to network reliability, user trust, and the overall ecosystem. Critical issues demand immediate attention and should be prioritized for resolution above all other concerns.

Use Appropriate Origin Checks

Severity: Critical

Description

Leaving critical or privileged extrinsics without proper origin checks can allow unauthorized actions, potentially compromising security and functionality. Critical operations must enforce strict access control to ensure that only authorized users or roles can execute them.

What should be avoided

In the following code, the execute function can be called by any user, which may lead to unauthorized or malicious actions:

#[pallet::call_index(0)]
#[pallet::weight(T::WeightInfo::execute_critical_operation())]
pub fn execute_critical_operation(origin: OriginFor<T>) -> DispatchResult {
    // Function with unrestricted access: the origin is never checked.
    execute_critical_operation();
    Ok(())
}

In this example:

  • The extrinsic can be executed by anyone because there are no access control checks in place, which can be particularly problematic for critical chain operations.

Best practice

Implement appropriate origin checks to restrict function access to specific users or roles, such as elevated origins, to protect critical functions.

Example 1

#[pallet::call_index(0)]
#[pallet::weight(T::WeightInfo::execute_critical_operation())]
pub fn execute_critical_operation(origin: OriginFor<T>) -> DispatchResult {
    // Restrict access to the root (admin) origin.
    ensure_root(origin)?;

    // Secure function logic here
    execute_critical_operation();
    Ok(())
}

In this example:

  • Using ensure_root enforces that only users or groups with elevated permissions can execute this function.

Example 2

// ---- In pallet/lib.rs ----
#[pallet::config]
pub trait Config: frame_system::Config {
    // ...
    /// Origin allowed to execute critical operations.
    type AuthorizedOrigin: EnsureOrigin<<Self as frame_system::Config>::RuntimeOrigin>;
}

#[pallet::call]
impl<T: Config> Pallet<T> {
    #[pallet::call_index(0)]
    #[pallet::weight(T::WeightInfo::execute_critical_operation())]
    pub fn execute_critical_operation(origin: OriginFor<T>) -> DispatchResult {
        // Use the custom AuthorizedOrigin check.
        T::AuthorizedOrigin::ensure_origin(origin)?;

        // Secure function logic here.
        execute_critical_operation();
        Ok(())
    }
}

In this example:

  • The pallet uses a configurable custom origin AuthorizedOrigin to specify which entities are allowed to execute the extrinsic.
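At the runtime level, this origin is then bound to a concrete policy. A minimal sketch, assuming a pallet named pallet_example and a council collective instance (both names are illustrative):

// ---- In runtime/lib.rs (sketch; pallet and collective names are assumed) ----
use frame_support::traits::EitherOfDiverse;
use frame_system::EnsureRoot;

impl pallet_example::Config for Runtime {
    // Accept either the root origin or two-thirds of the council.
    type AuthorizedOrigin = EitherOfDiverse<
        EnsureRoot<AccountId>,
        pallet_collective::EnsureProportionAtLeast<AccountId, CouncilCollective, 2, 3>,
    >;
}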

Avoid Unbounded Iterations

Severity: Critical

Description

Unbounded iterations over large data structures in Substrate runtimes can lead to excessive weight consumption during transaction execution. Every operation must account for its computational cost to maintain security and prevent abuse. Unbounded iterations can result in spam or denial-of-service (DoS) attacks if an extrinsic consumes too much weight, blocking other operations and degrading blockchain performance.

What should be avoided

The following example illustrates an iteration over all items in big_data_set without any limit. In a blockchain context, such an operation can lead to unpredictable execution times and excessive resource usage:

#[pallet::storage]
pub type UnboundedData<T: Config> = StorageValue<_, Vec<u32>, ValueQuery>;

let big_data_set = UnboundedData::<T>::get();
for item in big_data_set {
    // Process each item with no limit.
}

Best practice

Option 1: Process up to a maximum number of elements

Use a bounded iterator or limit the number of items processed in each iteration. This approach prevents excessive resource usage and keeps operations predictable, ensuring the execution weight remains within acceptable bounds.

const MAX_ITEMS: usize = 20;

#[pallet::storage]
pub type UnboundedData<T: Config> = StorageValue<_, Vec<u32>, ValueQuery>;

let big_data = UnboundedData::<T>::get();
for item in big_data.iter().take(MAX_ITEMS) {
    // Process a limited number of items safely.
}

By setting a maximum limit, you can control the processing load and avoid potential performance issues that could compromise the blockchain's stability.

Option 2: Use a bounded data structure

Alternatively, use bounded data structures to enforce strict size limits at the data storage level. This ensures that the maximum number of iterations is predefined and controlled.

#[pallet::storage]
pub type BoundedData<T: Config> = StorageValue<_, BoundedVec<u32, T::MaxEntries>, ValueQuery>;

let bounded_data = BoundedData::<T>::get();
for item in bounded_data {
    // Iterates over a data structure with a bounded size.
}

Using a bounded data structure ensures compliance with weight management policies and reduces the risk of performance bottlenecks or attacks on the blockchain.
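The MaxEntries bound used above is typically exposed as a pallet constant so that each runtime can choose its own limit; a minimal sketch:

#[pallet::config]
pub trait Config: frame_system::Config {
    /// Maximum number of entries that `BoundedData` may hold.
    #[pallet::constant]
    type MaxEntries: Get<u32>;
}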

Unchecked Input Parameters

Severity: Critical

Description

In Substrate runtime development, lack of input validation can lead to vulnerabilities and unexpected behaviors. Unverified inputs might result in invalid states, security issues, or exploitation by malicious actors, disrupting the normal operation of the blockchain.

What should be avoided

The following code accepts input without validating its range or format, which can lead to unintended results:

#[pallet::storage]
pub type ExecutedAt<T: Config> = StorageValue<_, u8>;

fn store_execution_time(hour_of_day: u8) {
    // The input is written to storage without any validation.
    ExecutedAt::<T>::put(hour_of_day);
}

In this example:

  • The hour_of_day input is only valid if it is between 0 and 23; any other value should be rejected with an error.

Best practice

Implement input validation to ensure data meets expected constraints. This example enforces a maximum limit to avoid out-of-range values:

#[pallet::storage]
pub type ExecutedAt<T: Config> = StorageValue<_, u8>;

#[pallet::error]
pub enum Error<T> {
    /// The provided hour is outside the 0-23 range.
    TimeOutOfRange,
}

fn store_execution_time(hour_of_day: u8) -> Result<(), Error<T>> {
    // Validate input before writing to storage.
    ensure!(hour_of_day <= 23, Error::<T>::TimeOutOfRange);

    ExecutedAt::<T>::put(hour_of_day);
    Ok(())
}

By validating inputs, we prevent potentially harmful values from entering the system, enhancing reliability and security.

Avoid Unwrap Usage Inside Runtime

Severity: Critical

Description

Using unwrap() and similar methods for error handling in Substrate runtime/pallet code can lead to runtime panics, which are particularly dangerous in a decentralized network. Such panics can cause block production to halt, disrupt the network, and compromise the reliability of the blockchain. Explicit error handling ensures more robust and predictable behavior, safeguarding the system against unexpected conditions in production environments.

What should be avoided

The following example demonstrates how using unwrap() for error handling can result in runtime panics, which are neither user-friendly nor safe in a blockchain context:

// Create an empty vector: []
let my_data = Vec::<u32>::new();

// Panics: index 0 does not exist.
let value = my_data.get(0).unwrap();

// Same panic, via implicit unwrapping (indexing).
let value = my_data[0];

In a Substrate runtime, such panics can halt block production, causing downtime at best and leaving the chain in an unrecoverable state at worst.

Best practice

Handle errors explicitly by using Result and provide descriptive error messages:

#[pallet::error]
pub enum Error<T> {
    /// Custom error variant.
    MyError,
}

let my_data = Vec::<u32>::new();

// Gracefully handle the missing value instead of panicking.
let value = my_data.get(0).ok_or(Error::<T>::MyError)?;

In this example:

  • The code safely handles the case where my_data.get(0) might be None, allowing for graceful error handling and reducing the risk of panics.
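When a value is genuinely expected to exist and its absence would indicate a logic bug, FRAME's Defensive traits offer a middle ground: they fall back to a safe value in production while panicking in tests so the bug still gets caught. A sketch:

use frame_support::traits::Defensive;

let my_data = Vec::<u32>::new();

// Falls back to 0 in production (logging a defensive failure),
// but panics in tests/debug builds so the bug surfaces early.
let value = my_data.get(0).copied().defensive_unwrap_or(0);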

Use Benchmarking for Accurate Dynamic Weights

Severity: Critical

Description

Hardcoding weights for extrinsics in a blockchain can lead to significant inaccuracies in execution resource estimation. When weights are fixed, they may not reflect the actual execution costs or resource usage, resulting in either overestimation or underestimation.

A hardcoded weight might underestimate the cost of processing transactions with complex logic, resulting in unexpected execution costs and causing issues when building a block. Conversely, overestimated weights could prevent some transactions from executing, wasting network resources and limiting scalability.

What should be avoided

Avoid using hardcoded weights directly in the function definition of extrinsics.

#[pallet::call_index(0)]
#[pallet::weight(
    // Hardcoded weight that never tracks the real execution cost.
    Weight::from_parts(10_000, 0) + T::DbWeight::get().reads_writes(1, 1)
)]
pub fn do_something(
    origin: OriginFor<T>,
    some_data: u32
) -> DispatchResult {
    // Extrinsic logic
    Ok(())
}

In this example:

  • The weight is fixed, leading to potential inaccuracies in resource estimation, which can result in suboptimal performance and affect transaction processing on the network.

Best practice

Implement proper benchmarking to dynamically assess the weights of your functions. This process involves measuring the actual execution costs during test runs and then applying the generated weights in your extrinsic definitions.

// ---- In pallet/benchmarking.rs ----
#[benchmark]
fn do_something() {
    // Setup
    let account: T::AccountId = whitelisted_caller();

    // Execution
    #[extrinsic_call]
    _(RawOrigin::Signed(account), 5u32);
}

// Run the benchmarks, then reference the generated WeightInfo
// from the extrinsic in the lib.rs file:
#[pallet::call_index(0)]
#[pallet::weight(T::WeightInfo::do_something())]
pub fn do_something(
    origin: OriginFor<T>,
    some_data: u32
) -> DispatchResult {
    // Extrinsic logic
    Ok(())
}
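The generated weights are consumed through the WeightInfo trait referenced above. Schematically, a generated weights file looks like the sketch below; the struct name SubstrateWeight and the numeric values are placeholders, since real files are produced by the benchmarking CLI from measured data:

use core::marker::PhantomData;

/// Weight functions for the pallet (normally generated, not hand-written).
pub trait WeightInfo {
    fn do_something() -> Weight;
}

pub struct SubstrateWeight<T>(PhantomData<T>);
impl<T: frame_system::Config> WeightInfo for SubstrateWeight<T> {
    fn do_something() -> Weight {
        // Placeholder values; generated files contain measured costs.
        Weight::from_parts(12_000_000, 3593)
            .saturating_add(T::DbWeight::get().reads(1))
            .saturating_add(T::DbWeight::get().writes(1))
    }
}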

Prioritize Reserve Asset Transfer Over Teleport

Severity: Critical

Description

In a multi-consensus system environment where interconnections facilitate value transfers among participants, it is essential to select the most effective asset transfer mechanism for integration with peers.

Currently, XCM and XCMP serve as solutions for such interactions. Given their "fire-and-forget" nature, it is crucial to understand the differences between teleporting assets and using the reserve asset transfer mechanism:

  • Teleport: Assets are burned at the origin and minted at the destination, requiring full trust in issuance management between both parties.
  • Reserve transfer: Assets are deposited into a commonly trusted reserve, with a representative derivative minted at the destination. This approach allows both parties to rely on a globally trusted reserve, from which they can audit issuance.

The key distinction lies in trust: teleportation requires bilateral trust in issuance management, while reserve transfers depend on a single, globally trusted reserve for transparency and auditability.

What should be avoided

Accepting teleports from multiple origins and setting teleportation as the default method for sending and receiving tokens across other consensus systems should be avoided.

These configurations can vary, but adjustments in xcm-executor require extreme caution. For instance, in an extreme scenario, allowing all teleports without restriction could lead to vulnerabilities; even selectively permitting specific teleports requires careful review.

type IsTeleporter = Everything;

Best practice

Prioritize Reserve Asset transfers as the default token transfer method across consensus systems and use a trusted reserve such as Asset Hub.

pub type Reserves = ReserveAssetsFrom<AssetReserveLocation>;
// ...
type IsReserve = Reserves;

In this example, only assets from a predefined reserve are accepted via Reserve transfer. Adjust the level of permissiveness to suit your use case, and ensure sufficient time is spent making an informed decision.
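ReserveAssetsFrom is not an xcm-builder primitive; it is typically a small filter implementing ContainsPair that accepts only assets arriving from the configured reserve location. A sketch under that assumption, using XCM v4 type names (older versions use MultiAsset/MultiLocation):

use core::marker::PhantomData;
use frame_support::traits::{ContainsPair, Get};
use xcm::latest::prelude::*;

pub struct ReserveAssetsFrom<T>(PhantomData<T>);
impl<T: Get<Location>> ContainsPair<Asset, Location> for ReserveAssetsFrom<T> {
    fn contains(_asset: &Asset, origin: &Location) -> bool {
        // Accept only assets whose origin is the configured reserve.
        origin == &T::get()
    }
}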

High Severity Issues

Description

High-severity issues are serious problems that, while not immediately threatening the system's stability, pose a significant risk to its operation. In the context of Polkadot SDK, these issues can disrupt key functionalities, create vulnerabilities, or degrade performance over time. Unlike critical issues that can cause immediate chain halts or data corruption, high-severity issues tend to manifest gradually but can still severely impact the network if left unresolved.

Implications

Unresolved high-severity issues can lead to persistent operational inefficiencies or even chain stalls under certain conditions, especially if they affect essential runtime logic such as storage management or resource-heavy calculations. These issues increase the likelihood of security exploits and degrade the user experience, potentially eroding trust in the network. High-severity issues should be addressed with urgency, as their prolonged existence could escalate into critical problems, threatening the network’s performance and reliability.

Benchmark Extrinsic Worst-case Scenarios

Severity: High

Description

In Polkadot SDK, benchmarks are used to measure the computational cost of runtime operations, such as extrinsics, storage accesses, and logic execution. These costs are quantified in weights, which represent the time and resources required to execute a specific operation on the blockchain.

Through benchmarking, developers define the WeightInfo trait, which associates each extrinsic with a weight value based on the benchmarks. The runtime then uses these weights to enforce limits on block execution and calculate fees dynamically.

By performing benchmarks that account for worst-case scenarios, developers ensure their runtime remains efficient, secure, and scalable under real-world conditions. However, benchmarks that only cover typical scenarios may underestimate execution weights, potentially leading to resource overuse or transaction failures in real-world usage. To ensure accurate weight calculations, benchmarks must account for the heaviest possible execution path.

What should be avoided

The following code benchmarks a typical scenario, which may not account for the heaviest possible execution path, leading to underestimated weights:

Example 1

#[benchmark]
fn typical_scenario() {
    // Benchmark with a small data set.
    let items = generate_data(10);

    #[block]
    {
        process_items(items);
    }
}

In this example:

  • The benchmark uses a small data set (10 items), which may not reflect the workload in a worst-case scenario.

Example 2

// ---- In pallet/lib.rs ----
#[pallet::call_index(0)]
#[pallet::weight(T::WeightInfo::some_extrinsic())]
pub fn some_extrinsic(origin: OriginFor<T>, input: bool) -> DispatchResult {
    let computation_result = match input {
        true => 1,
        false => very_heavy_function(),
    };

    // Do something with the result.

    Ok(())
}

// ---- In pallet/benchmarks.rs ----
#[benchmark]
fn some_extrinsic() {
    let account: T::AccountId = whitelisted_caller();
    let input = true;

    // Execution
    #[extrinsic_call]
    _(RawOrigin::Signed(account), input);
}

In this example:

  • The worst path occurs when input is false, as this triggers the very_heavy_function call. However, in the benchmark, input is set to true, meaning the benchmark will only measure the faster execution path. Consequently, the calculated execution cost will be an underestimate.

Best practice

Benchmark at least the worst-case path by simulating the heaviest possible workload, ensuring the calculated weight accurately reflects maximum resource usage.

Example 1

// ---- In pallet/benchmarks.rs ----
#[benchmark]
fn worst_case_scenario(s: Linear<1, MAX_ITEMS>) {
    // Benchmark with a data set that scales up to MAX_ITEMS.
    let items = generate_data(s);

    #[block]
    {
        process_items(items);
    }
}

In this improved example:

  • generate_data(s) scales the data set with the Linear parameter up to MAX_ITEMS, so the heaviest allowed load is measured.
  • By covering the worst-case path, this benchmark provides a realistic weight that prevents unexpected performance issues during peak loads.

Example 2

// ---- In pallet/benchmarks.rs ----
#[benchmark]
fn some_extrinsic() {
    let account: T::AccountId = whitelisted_caller();

    // Set the input so that the worst-case path is executed.
    let input = false;

    // Execution
    #[extrinsic_call]
    _(RawOrigin::Signed(account), input);
}

Example 3

For mission-critical extrinsics that are used frequently, consider benchmarking each execution path independently for finer weight granularity:

// ---- In pallet/lib.rs ----
#[pallet::call_index(0)]
#[pallet::weight(
    T::WeightInfo::some_extrinsic_path_1()
        .max(T::WeightInfo::some_extrinsic_path_2())
)]
pub fn some_extrinsic(origin: OriginFor<T>, input: bool) -> DispatchResultWithPostInfo {
    let (result, weight) = match input {
        true => (1, T::WeightInfo::some_extrinsic_path_1()),
        false => (very_heavy_function(), T::WeightInfo::some_extrinsic_path_2()),
    };

    // Do something with the result.

    // Return the weight of the path actually taken; the difference
    // from the pre-charged worst case is refunded.
    Ok(Some(weight).into())
}

// ---- In pallet/benchmarks.rs ----
#[benchmark]
fn some_extrinsic_path_1() {
    let account: T::AccountId = whitelisted_caller();

    // Input that executes the path where the variable is true.
    let input = true;

    // Execution
    #[block]
    {
        let _ = Pallet::<T>::some_extrinsic(RawOrigin::Signed(account).into(), input);
    }
}

#[benchmark]
fn some_extrinsic_path_2() {
    let account: T::AccountId = whitelisted_caller();

    // Input that executes the path where the variable is false.
    let input = false;

    // Execution
    #[block]
    {
        let _ = Pallet::<T>::some_extrinsic(RawOrigin::Signed(account).into(), input);
    }
}

In this example:

  • The fee corresponding to the worst-case execution path is initially charged to the user.
  • If the actual execution consumes fewer resources than the worst-case scenario, the difference is refunded to the user.

Keep Dependencies Up to Date

Severity: High

Description

Using outdated libraries can introduce security vulnerabilities, compatibility issues, and reduced performance, as older dependencies may lack recent bug fixes and security patches. In blockchain development, using outdated dependencies is particularly risky because the decentralized nature of the ecosystem amplifies the consequences of vulnerabilities or compatibility issues. Unlike traditional applications, blockchain systems operate in trustless environments where security, reliability, and performance are paramount.

What should be avoided

Example 1

The following code relies on an outdated library that may contain deprecated or insecure functions:

use outdated_library::deprecated_fn;

Example 2

The following configuration uses an outdated branch of the Polkadot-SDK repository, which may include deprecated features, unresolved issues, or security vulnerabilities:

# In Cargo.toml
sp-runtime = { git = "https://github.com/paritytech/substrate.git", branch = "polkadot-v1.0.0" }

Best practice

Regularly update dependencies to their latest stable versions, ensuring the use of secure and well-maintained functions:

Example 1

Replace deprecated functions by migrating to the latest version of the library:

use latest_library::safe_fn;

Example 2

Ensure you are using the most recent stable branch of the Polkadot SDK repository:

# In Cargo.toml
sp-runtime = { git = "https://github.com/paritytech/polkadot-sdk.git", branch = "polkadot-stable2409" }

By updating to the latest versions, you benefit from security improvements, performance enhancements, and better compatibility, ensuring a more robust and secure application.

Avoid the Usage of Pseudo Random Numbers

Severity: High

Description

In Substrate runtime development, using non-deterministic methods for critical processes, such as selecting or assigning tasks, can create opportunities for manipulation. This may allow certain participants to be chosen more frequently or unfairly, undermining the trust and fairness essential to decentralized systems.

What should be avoided

The following code relies on a random selection, which can lead to inconsistent results and potential bias:

#[pallet::storage]
pub type Verifiers<T: Config> = StorageValue<_, Vec<T::AccountId>, ValueQuery>;

fn assign_verifier() -> T::AccountId {
    let verifiers = Verifiers::<T>::get();

    // Random selection from the list.
    let index = rand::random::<usize>() % verifiers.len();

    // The task is assigned to a verifier unpredictably.
    verifiers[index].clone()
}

In this example:

  • rand::random::<usize>() % verifiers.len() selects a verifier at random, which could result in unfair frequency of selection for certain verifiers.

Best practice

Implement a deterministic selection method, such as using an ID as a basis to ensure a fair, repeatable selection:

#[pallet::storage]
pub type Verifiers<T: Config> = StorageValue<_, Vec<T::AccountId>, ValueQuery>;

fn assign_verifier_deterministic(task_id: u32) -> T::AccountId {
    let verifiers = Verifiers::<T>::get();

    // Deterministic selection based on the task ID.
    let index = task_id as usize % verifiers.len();
    verifiers[index].clone()
}

In this improved example:

  • The verifier is selected based on task_id, ensuring that each task consistently maps to a specific verifier in a fair, predictable way.
  • This approach prevents any single verifier from being chosen disproportionately, ensuring fair distribution and reducing the risk of manipulation.

Use Safe Arithmetic Operations

Severity: High

Description

In Substrate runtime development, uncontrolled overflow or underflow is a critical issue because it can cause the chain to stall. Substrate’s deterministic execution model requires every node in the network to process transactions in the exact same way. If an unchecked arithmetic operation causes an overflow or underflow, the runtime will panic, leading to an unrecoverable state for the affected block.

What should be avoided

The following code performs addition without checking for overflow, which may cause the program to wrap around to an unintended value:

// Potential overflow if a + b exceeds the maximum value of the type.
let total: u16 = a + b;

In this example:

  • If a and b are large, the result may exceed the data type’s maximum value, causing an overflow and leading to incorrect results.

Best practice

Option 1: Use Checked Arithmetic

Use checked_add to return an error if the operation exceeds the maximum value:

let total: u16 = a.checked_add(b).ok_or(Error::<T>::Overflow)?;

In this example:

  • checked_add returns None if a + b would overflow, allowing us to handle the error with ok_or.
  • This ensures that the operation will only proceed if the result is within bounds, preventing overflow-related issues.
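For completeness, the Overflow variant referenced above would be declared in the pallet's error enum, along these lines:

#[pallet::error]
pub enum Error<T> {
    /// An arithmetic operation overflowed.
    Overflow,
}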

Option 2: Use Saturating Arithmetic

Alternatively, saturating_add ensures that the result will cap at the maximum value of the type rather than overflowing:

let total: u16 = a.saturating_add(b);

In this example:

  • saturating_add will set total to the maximum possible value if a + b exceeds the type’s limit.
  • This approach avoids panics or errors by safely capping the result, which is useful when an upper bound is acceptable in the application logic.

Both methods improve the reliability of arithmetic operations, ensuring predictable behavior without overflow.

Be Careful With Storage Growth

Severity: High

Description

Allowing unlimited entries in storage structures can lead to uncontrolled storage growth, resulting in overflow, increased costs, and performance issues during operations that manage these storage items. In a blockchain context, this can impact execution weight, hinder scalability, and degrade network performance.

What should be avoided

The following code allows adding entries without any limit, leading to uncontrolled storage growth:

#[pallet::storage]
pub type Entries<T: Config> = StorageValue<_, Vec<u32>, ValueQuery>;

fn add_entry(entry: u32) {
    // Appends entries with no upper bound on growth.
    Entries::<T>::mutate(|entries| {
        entries.push(entry);
    });
}

Best practice

Using BoundedVec, we can set a fixed maximum number of entries, enforcing storage limits directly within the data structure. This approach automatically restricts the growth of entries, enhancing efficiency.

#[pallet::storage]
pub type Entries<T: Config> = StorageValue<_, BoundedVec<u32, T::MaxEntries>, ValueQuery>;

#[pallet::error]
pub enum Error<T> {
    /// The `MaxEntries` limit was reached.
    TooManyEntries,
}

fn add_entry_limited(entry: u32) -> Result<(), Error<T>> {
    Entries::<T>::try_mutate(|entries| {
        entries.try_push(entry).map_err(|_| Error::<T>::TooManyEntries)?;
        Ok(())
    })
}

Here, the BoundedVec ensures that the number of entries cannot exceed T::MaxEntries, which enforces storage limits directly. This approach maintains predictable storage usage and efficient operations by preventing uncontrolled accumulation of data.

Prevent Inconsistent State by Distributing State Commitment Costs

Severity: High

Description

Relying on a blanket extrinsic to commit multiple storage entities or operations can lead to excessive costs, penalties, or errors, especially if the transaction fails. This approach risks incomplete operations and could lead to inconsistent state. In blockchain systems, atomicity is critical; partial updates or failures during state commitments can compromise system integrity and lead to unintentional bugs or vulnerabilities.

What should be avoided

In the following example, one function is responsible for committing all previous mutations, leading to a single point of failure and high resource usage:

#[pallet::storage]
pub type PendingOperations<T: Config> = StorageValue<_, Vec<UserOperation>, ValueQuery>;

fn finalize_operations() {
    let pending_operations = PendingOperations::<T>::get();
    for operation in pending_operations {
        // Committing every pending item in a single pass.
        complete_operation(operation);
        commit_storage([users]);
        mutate_items([other_items]);
    }
}

Using this method:

  • A failure in processing any item could result in none of the operations being applied.
  • Attempting to commit all operations at once increases the execution weight, making it more likely to exceed block weight limits and fail.

Best practice

Use a claim-based approach where each user commits their operation individually, or use batch processing to distribute the load.

fn claim_operation(participant: T::AccountId) -> Result<(), Error<T>> {
    // Each participant commits their own operation.
    complete_operation_for(participant)?;
    Ok(())
}

This approach spreads finalization costs across participants, reducing the risk of a single transaction failing and making the system more scalable and resilient. By isolating operations to individual participants or smaller groups, the state remains consistent, and the system can handle larger workloads without risking failure.
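In pallet terms, the claim can be an ordinary extrinsic so that each participant pays for and finalizes only their own share; a sketch (the weight function and helper are assumed):

#[pallet::call_index(1)]
#[pallet::weight(T::WeightInfo::claim_operation())]
pub fn claim_operation(origin: OriginFor<T>) -> DispatchResult {
    let who = ensure_signed(origin)?;

    // Finalize only the caller's own pending operation.
    complete_operation_for(&who)?;
    Ok(())
}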

Use Atomic Operations to Prevent State Inconsistencies

Severity: High

Description

Functions that modify multiple resources without transactional integrity may leave the system in an inconsistent state if an error occurs mid-operation. In a blockchain environment, where state consistency is critical to maintain trust and operational stability, this can lead to data corruption, incorrect balances, or unexpected behavior.

What should be avoided

Modifying multiple resources without rollback mechanisms can lead to partial updates if an error occurs:

fn transfer_funds(sender: &T::AccountId, recipient: &T::AccountId, amount: u32) {
    reduce_balance(sender, amount);

    // The result is ignored: no rollback if this fails.
    let _ = increase_balance(recipient, amount);
}

In this example:

  • Lack of Error Handling: If increase_balance fails, there is no mechanism to revert the changes made by reduce_balance. This results in an inconsistent state where the sender’s balance is reduced, but the recipient’s balance remains unchanged, violating the atomicity of the transaction.

Best practice

Use atomic operations or implement rollback logic to ensure all changes are applied consistently:

// Returning Result allows the caller (and the transactional dispatch
// layer) to roll back if any step fails.
fn transfer_funds(sender: &T::AccountId, recipient: &T::AccountId, amount: u32) -> Result<(), Error<T>> {
    reduce_balance(sender, amount)?;
    increase_balance(recipient, amount)?;
    Ok(())
}

In this example:

  • Transactional Integrity: If any part of the operation fails, the function returns an error, and no changes are committed.
  • Consistency: Both reduce_balance and increase_balance succeed or fail together, maintaining the state integrity of the system.

Ensuring atomicity in such operations prevents inconsistencies and safeguards the reliability of the blockchain state.
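Dispatchable calls in recent FRAME versions are already executed transactionally, so returning an error reverts their storage changes. For helper logic that runs outside that context, frame_support's storage transaction API can wrap a group of mutations explicitly; a sketch:

use frame_support::storage::{with_transaction, TransactionOutcome};
use sp_runtime::DispatchError;

fn transfer_funds_atomic(
    sender: &T::AccountId,
    recipient: &T::AccountId,
    amount: u32,
) -> Result<(), DispatchError> {
    with_transaction(|| {
        match reduce_balance(sender, amount).and_then(|_| increase_balance(recipient, amount)) {
            // Commit both changes together on success.
            Ok(_) => TransactionOutcome::Commit(Ok(())),
            // Any error rolls back every change made inside the closure.
            Err(e) => TransactionOutcome::Rollback(Err(e.into())),
        }
    })
}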

Avoid Redundant Storage Access in Mutations

Severity: High

Description

In Substrate runtime development, improper usage of try_mutate can lead to redundant storage operations, negatively impacting performance and increasing the overall weight of the extrinsic. Using both try_mutate and insert in the same closure causes unnecessary overhead by performing multiple storage accesses, which is particularly costly in terms of computation and I/O in a blockchain environment.

What should be avoided

Using try_mutate followed by insert in a nested closure results in redundant writes:

#[pallet::storage]
pub type MyStorage<T: Config> = StorageMap<_, Blake2_128Concat, u32, u32>;

let new_value = 4_u32;
MyStorage::<T>::try_mutate(id, |item| -> Result<(), Error<T>> {
    // Raise an error if the condition is not met.
    ensure!(new_value < 10 && item.is_none(), Error::<T>::InvalidValue);

    *item = Some(new_value);

    // Redundant: try_mutate already persists the new value on Ok.
    MyStorage::<T>::insert(id, new_value);

    Ok(())
})?;

Best practice

Use either try_mutate to modify the value in place or insert alone, but avoid combining them unnecessarily:

#[pallet::storage]
pub type MyStorage<T: Config> = StorageMap<_, Blake2_128Concat, u32, u32>;

let new_value = 4_u32;
MyStorage::<T>::try_mutate(id, |item| -> Result<(), Error<T>> {
    // Raise an error if the condition is not met.
    ensure!(new_value < 10, Error::<T>::InvalidValue);

    // The closure persists the new value when it returns Ok.
    *item = Some(new_value);
    Ok(())
})?;

This approach reduces storage accesses, improving both performance and resource efficiency.

Prevent Unnecessary Reads and Writes in Storage Access

Severity: High

Description

Accessing and writing to blockchain storage is resource-intensive due to the I/O cost of interacting with disk-backed or serialized structures and the algorithmic overhead of traversing key-value stores. Each read and write contributes to execution weight and transaction fees, so redundant operations are both inefficient and costly; storage operations are among the most expensive a runtime performs.

In a blockchain context, where every operation contributes to the overall transaction weight and cost, optimizing storage access is critical. Methods such as mutate and try_mutate wrap the read-write logic into a single call, reducing the overhead of separate operations.

What should be avoided

Reading from and then writing to storage separately leads to redundant operations:

// Read from storage.
let value = MyStorage::<T>::get();

// Write back to storage.
MyStorage::<T>::put(value + 1);

This pattern involves two storage accesses—one for reading and another for writing—resulting in increased computational and I/O costs.

Best practice

Use mutate (or try_mutate / try_mutate_exists when the update can fail) to combine read and write logic in a single, efficient operation:

// Read, modify, and write in a single storage access.
MyStorage::<T>::mutate(|value| {
    *value += 1;
});

This approach minimizes storage operations, improving both performance and resource usage. By consolidating the read and write into a single call, you ensure that the execution weight remains low while maintaining predictable and efficient storage access patterns.
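When the update itself can fail, try_mutate keeps the same single-access pattern while propagating errors; a sketch, assuming a ValueQuery storage item and an Overflow error variant:

MyStorage::<T>::try_mutate(|value| -> Result<(), Error<T>> {
    // Read, check, and write in one storage access.
    *value = value.checked_add(1).ok_or(Error::<T>::Overflow)?;
    Ok(())
})?;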

Implement try-state Hook

Severity: High

Description

In Substrate, hooks are specialized lifecycle methods that allow pallets to perform operations at specific points during the blockchain runtime's execution cycle. Common hooks include on_initialize, on_finalize, and offchain_worker, each serving a distinct purpose, such as initializing block-specific logic, finalizing block-level operations, or executing off-chain tasks.

The try-state hook is a specialized hook used exclusively during try-runtime builds. Unlike other hooks that are part of normal runtime operations, try-state is designed to validate storage state without altering it. This makes it an invaluable tool for debugging and testing, as it enables developers to simulate runtime upgrades, migrations, and other critical changes in a controlled environment.

The absence of try-state hooks prevents runtime sanity checks, making it harder to ensure that the storage state is sensible after upgrades or other critical operations. These hooks are executed only when the runtime is built with the try-runtime feature and provide a mechanism to validate state without altering storage, ensuring consistency and correctness. It is a best practice to run try-runtime prior to any migration to catch potential issues in a controlled environment.

What should be avoided

Skipping sanity checks during runtime upgrades or migrations can leave storage inconsistencies unnoticed, leading to potential bugs:

fn on_runtime_upgrade() -> Weight {
    // The state is overwritten with no validation of consistency.
    SomeStorage::<T>::mutate(|state| {
        *state = new_state;
    });

    Weight::zero()
}

Best practice

Implement the try-state hook to perform thorough state checks without altering storage. Use try-runtime to simulate migrations and validate state consistency before deploying updates:

#[pallet::hooks]
impl<T: Config> Hooks<BlockNumberFor<T>> for Pallet<T> {
    #[cfg(feature = "try-runtime")]
    fn try_state(_n: BlockNumberFor<T>) -> Result<(), TryRuntimeError> {
        // Example: ensure a storage map is non-empty.
        // Weight consumption is not a concern in try-runtime builds.
        ensure!(SomeStorage::<T>::iter().count() > 0, "Storage map is empty");

        // Add more sanity checks as needed.
        Ok(())
    }
}

The key benefits of this approach are:

  1. Controlled Validation: The try-state hook ensures that the state is checked rigorously without altering storage, making it safe for runtime use.
  2. Integration with try-runtime: When the node is built with the try-runtime feature, these hooks enable testing in a controlled environment, simulating runtime upgrades and migrations to catch potential issues early.
  3. Prevention of Errors: Running try-runtime before deploying migrations reduces the risk of deploying a broken runtime, ensuring system stability.

By implementing try-state hooks and leveraging try-runtime, developers can validate and maintain storage consistency while catching potential issues early in the development cycle.
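The same feature gate also enables pre_upgrade and post_upgrade checks in the Hooks impl (and on OnRuntimeUpgrade migrations), which pair naturally with try_state. A sketch, assuming Encode/Decode are in scope; the entry-count invariant is illustrative:

#[cfg(feature = "try-runtime")]
fn pre_upgrade() -> Result<Vec<u8>, TryRuntimeError> {
    // Snapshot a value before the upgrade runs.
    Ok((SomeStorage::<T>::iter().count() as u32).encode())
}

#[cfg(feature = "try-runtime")]
fn post_upgrade(state: Vec<u8>) -> Result<(), TryRuntimeError> {
    let pre_count = u32::decode(&mut &state[..])
        .map_err(|_| TryRuntimeError::Other("failed to decode pre-upgrade state"))?;
    // Verify the upgrade preserved all entries.
    ensure!(SomeStorage::<T>::iter().count() as u32 == pre_count, "entry count changed");
    Ok(())
}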

Properly Set Up XCM Barrier

Severity: High

Description

As the name suggests, the Barrier type in the commonly used XCM executor serves as entry protection for execution on a chain. It should be set explicitly to define which origins can execute without cost—such as for un-bricking purposes—and which should incur charges. Additionally, consider carefully which origins should be entirely restricted from access via XCM.

As a rule of thumb, it is always recommended to be as restrictive as possible.

What should be avoided

In the following example, there is a clear vulnerability: any origin can execute XCM for free on the chain using this configuration:

pub type Barrier = (
    xcm_builder::AllowUnpaidExecutionFrom<ThisParachain>,
    WithComputedOrigin<
        (AllowUnpaidExecutionFrom<ParentRelay>, AllowExplicitUnpaidExecutionFrom<Everything>),
        UniversalLocation,
        ConstU32<1>,
    >,
);

This configuration allows overly permissive unpaid execution, exposing the system to abuse and unauthorized access.

Best practice

Properly identify each relevant case, aiming to be as restrictive as possible while also requiring explicit authorization for unpaid execution when necessary:

pub type Barrier = (
    // Allow XCMs with some computed origins to pass through.
    WithComputedOrigin<
        (
            // If the message is one that immediately attempts to pay for execution, then allow it.
            AllowTopLevelPaidExecutionFrom<OnlyTrustedChains>,
            // Parent, its pluralities (i.e. governance bodies), and the Fellows plurality
            // get free execution.
            AllowExplicitUnpaidExecutionFrom<ParentOrParentsExecutivePlurality>,
            // Subscriptions for version tracking are OK.
            AllowSubscriptionsFrom<ParentRelayOrSiblingParachains>,
        ),
        UniversalLocation,
        ConstU32<8>,
    >,
);

Inline comments, as shown above, help clarify the purpose and intent of each rule, ensuring the configuration aligns with security best practices. This approach minimizes vulnerabilities by restricting access and enforcing explicit checks for unpaid execution.

Ensure Consistent Asset Registration by Adhering to Host Chain Schema

Severity: High

Description

Asset representation on other chains must be carefully studied before registering the asset(s) on your chain, as XCMP and pallet-assets do not necessarily limit the possible AssetId assignments. This flexibility could result in registering an asset under a different schema than that used by other users or proposed by the host chain, potentially leading to serious integration issues.

What should be avoided

Consider a system that uses a Location schema to represent both local and foreign assets — i.e., assets originating from other consensus systems.

In this system, native assets from other chains are represented by their Location. Assuming the host chain and Chain X are both connected to the same Relay, the native asset of Chain X would be represented as:

// In runtime/xcm_config.rs file
parameter_types! {
    pub AssetOfX: Location = Location::new(1, [Parachain(PARA_ID_OF_X)]);
}

Now, suppose Chain Y integrates with the host system and registers its native asset as follows:

// In runtime/xcm_config.rs file
parameter_types! {
    pub AssetOfY: Location = Location::new(1, [Parachain(PARA_ID_OF_Y), GeneralIndex(0)]);
}

While this could be accepted by the host system, it disrupts the established schema. For example, front-ends integrated with the host would now need to account for an additional Junction to retrieve AssetOfY, creating an exception that complicates integration and consistency across assets.

Best practice

The host’s adopted schema must be carefully studied and followed. In the case of AssetOfY, it should be registered as follows:

// In runtime/xcm_config.rs file
parameter_types! {
    pub AssetOfY: Location = Location::new(1, [Parachain(PARA_ID_OF_Y)]);
}

More complex cases, such as assets created via pallet-assets on another system, may have a more intricate Location. Nonetheless, the host should provide a clear schema, which all integrating systems are expected to respect and adhere to.

Make Proper Usage of XCM Junctions

Severity: High

Description

Junctions are components within a Location that specify how to navigate through a chain of entities to reach a specific asset or resource. In Substrate-based blockchains, Junctions are critical for ensuring clarity and precision in cross-consensus message (XCM) communications.

However, due to their flexible design, some junctions, such as GeneralIndex, are prone to misuse. Misusing these junctions can introduce ambiguity or inconsistency in the XCM message, potentially leading to integration issues, incorrect asset handling, or unintended behavior.

When defining Junctions, adhere closely to their intended definitions to maintain clarity and ensure proper usage.

What should be avoided

Using GeneralIndex, or any other Junction, for purposes beyond representing the intended entity can result in misuse and ambiguity.

pub const AmountOfDecimals: u32 = 12;

parameter_types! {
    pub AssetOfZ: Location = Location::new(1, [Parachain(PARA_ID_OF_Z), GeneralIndex(AmountOfDecimals.into())]);
}

In this example, GeneralIndex is used to send information that describes a characteristic (the number of decimals) rather than representing an actual entity in the path. This deviates from its intended purpose, creating potential confusion and misinterpretation of the XCM message.

Best practice

Use Junctions strictly to represent entities in the Location path, adhering to their intended definitions. Avoid overloading their purpose to convey unrelated metadata or characteristics.

Example

If the goal is to specify an asset within a parachain, use an appropriate Junction, such as GeneralKey, to represent the asset explicitly rather than overloading GeneralIndex.

parameter_types! {
    // 32-byte, zero-padded key identifying the asset (XCM v3+ GeneralKey).
    pub ZAssetKey: [u8; 32] = { let mut k = [0u8; 32]; k[..9].copy_from_slice(b"ZAssetKey"); k };
    pub AssetOfZ: Location = Location::new(1, [Parachain(PARA_ID_OF_Z), GeneralKey { length: 9, data: ZAssetKey::get() }]);
}

Handling unmet requirements

If a scenario arises where existing Junctions or XCM instructions cannot meet the desired use case, avoid repurposing existing definitions. Instead, propose a new addition through the established governance process by opening an RFC that requests new fields or instructions aligned with the intended purpose.

By adhering to these guidelines, developers can ensure the integrity and clarity of XCM messages, preventing misuse and maintaining seamless interoperability between chains.

Medium Severity Issues

Description

Medium-severity issues in the context of Polkadot SDK often stem from outdated conventions, insufficient modularity, or inadequate adherence to modern standards. While these issues do not immediately compromise the blockchain's core functionality or security, they can negatively impact runtime performance, maintainability, and metadata quality, which are crucial for efficient interaction with frontends and other ecosystem components. Addressing these issues aligns the runtime with Substrate's best practices, ensuring compatibility with future updates and scalability as the chain evolves.

Implications

Resolving medium-severity issues in a Substrate runtime improves performance, reduces technical debt, and ensures the runtime's metadata is precise and useful for external tools and developers. By proactively addressing these concerns, developers maintain a clean, scalable codebase that adheres to Substrate's evolving standards and facilitates smoother interactions across the ecosystem.

Remove Deprecated Storage Getters

Severity: Medium

Description

The #[pallet::getter] attribute in Substrate is deprecated, and its continued use may lead to compatibility issues with future framework updates. Adopting modern approaches for storage access ensures compatibility with evolving standards. By transitioning away from deprecated getters, developers can maintain cleaner, more forward-compatible code while leveraging direct storage access or custom getter methods tailored to their needs.

What should be avoided

Using #[pallet::getter] to define storage getters can lead to issues with future updates, as shown below:

#![allow(unused)]
fn main() {
#[pallet::storage]
#[pallet::getter(fn deprecated_getter)]
pub type MyValue<T> = StorageValue<_, u32, OptionQuery>;
}

In this example:

  • The #[pallet::getter] attribute defines a deprecated getter function (deprecated_getter), which may no longer be supported in future Substrate versions.

Best practice

Access the storage value directly or use custom functions to handle storage access without relying on deprecated getters:

#![allow(unused)]
fn main() {
#[pallet::storage]
pub type MyValue<T> = StorageValue<_, u32, OptionQuery>;

impl<T: Config> Pallet<T> {
    // Create a custom getter
    fn get_my_value() -> Option<u32> {
        MyValue::<T>::get()
    }

    // Or simply access the storage item directly
    fn process_stuff() {
        if let Some(my_value) = MyValue::<T>::get() {
            // Use my_value here
        }
    }
}
}

In this example:

  • The get_my_value function provides controlled access to the storage item without using #[pallet::getter], maintaining compatibility with future updates.
  • The same result can be achieved by reading the storage item directly.
  • Both approaches ensure that storage access remains up-to-date and adaptable to evolving framework standards.

Avoid Hardcoded Parameters and Values

Severity: Medium

Description

Hardcoding parameters reduces flexibility, making the code less adaptable to changing environments or configurations. In Substrate runtime development, this rigidity can lead to unnecessary re-deployments and upgrades when requirements change. Configurable parameters enhance versatility, allowing runtime adjustments without altering the source code.

A pallet configuration in Polkadot SDK is a set of associated types and constants defined in the Config trait of a pallet. These configurations allow developers to parameterize aspects of the pallet’s behavior, such as limits, thresholds, or external dependencies, at the runtime level. By implementing the Config trait in the runtime, these values can be adjusted dynamically for different environments without modifying or redeploying the pallet’s source code. This makes it an ideal replacement for hardcoded constants, providing flexibility and adaptability to evolving requirements.

What should be avoided

Hardcoding values, such as limits or thresholds, can make it difficult to adapt the code without modifying the source:

#![allow(unused)]
fn main() {
// Hardcoded limit
const LIMIT: u32 = 100;

impl<T: Config> Pallet<T> {
    fn do_something() {
        // No configurable limit
        for i in 0..LIMIT {
            // Logic that uses the hardcoded limit
        }
    }
}
}

In this example:

  • LIMIT is fixed at compile time, meaning any change requires editing the source code, recompiling, and redeploying the runtime. This process is inefficient and can disrupt the network.

Best practice

Use configurable traits to allow parameter adjustments at the runtime level, enhancing flexibility and adaptability:

#![allow(unused)]
fn main() {
// --- In pallet/lib.rs file ---
pub trait Config: frame_system::Config {
    const LIMIT: u32;
}

impl<T: Config> Pallet<T> {
    fn do_something() {
        // Access configurable limit
        for i in 0..T::LIMIT {
            // Logic that uses the configurable limit
        }
    }
}

// --- In runtime/lib.rs file ---
impl some_pallet::Config for Runtime {
    // ...
    const LIMIT: u32 = 100;
}
}

With this approach:

  • LIMIT can be defined in the runtime configuration, allowing you to change it without modifying the source code.
  • This setup makes the code adaptable to different environments and easily configurable to meet evolving needs.
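
An equivalent and widely used FRAME pattern expresses the limit as a Get-based associated type, which also exposes the value in the runtime metadata via #[pallet::constant]. A minimal sketch, with Limit as an illustrative name:

#![allow(unused)]
fn main() {
// --- In pallet/lib.rs file ---
#[pallet::config]
pub trait Config: frame_system::Config {
    /// Upper bound used by `do_something`.
    #[pallet::constant]
    type Limit: Get<u32>;
}

// --- In runtime/lib.rs file ---
impl some_pallet::Config for Runtime {
    // ...
    type Limit = ConstU32<100>;
}
}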

Include Tests for Edge Cases

Severity: Medium

Description

Testing is crucial in the Substrate environment because the runtime is deterministic and forms the backbone of the blockchain. Any untested edge case could lead to bugs that propagate across all nodes, potentially halting the chain or causing irreparable state corruption. Since runtime updates require careful governance and coordinated upgrades, fixing errors post-deployment can be complex and time-consuming. Developers should aim for 100% test coverage to ensure that every possible scenario is accounted for, minimizing risks and guaranteeing the runtime behaves predictably and securely under all conditions.

Skipping tests for edge cases can result in unhandled scenarios where inputs approach their limits, leading to potential bugs and unpredictable runtime behavior. Edge cases often expose vulnerabilities that typical inputs might not trigger, making them essential for ensuring the robustness and reliability of the system.

What should be avoided

Neglecting boundary cases in testing may overlook issues that occur at the extremes of expected input ranges:

#![allow(unused)]
fn main() {
#[test]
fn test_process_data() {
    // Typical case
    assert_eq!(process_data(50), Some(50));
}
}

In this example:

  • The function only tests a typical case (50) and misses important edge conditions that could cause issues if unhandled.

Best practice

Include tests for boundary conditions to verify that the code handles edge cases, such as zero, maximum, and just beyond maximum values:

#![allow(unused)]
fn main() {
#[test]
fn test_process_data_with_boundary_cases() {
    // Minimum boundary
    assert_eq!(process_data(0), Some(0));

    // Maximum boundary
    assert_eq!(process_data(MAX_LIMIT), Some(MAX_LIMIT));

    // Beyond maximum boundary
    assert!(process_data(MAX_LIMIT + 1).is_none());
}
}

In this improved example:

  • We test the function at 0 (minimum boundary), MAX_LIMIT (maximum boundary), and MAX_LIMIT + 1 (just beyond the maximum).
  • This ensures that process_data behaves correctly at all crucial boundaries, improving robustness and reducing the risk of bugs in production.
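
For context, these tests assume a function that accepts values up to some MAX_LIMIT and rejects anything above it; a minimal hypothetical implementation could look like:

#![allow(unused)]
fn main() {
// Hypothetical function under test: accepts values up to MAX_LIMIT.
const MAX_LIMIT: u32 = 100;

fn process_data(value: u32) -> Option<u32> {
    (value <= MAX_LIMIT).then_some(value)
}
}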

Include Extrinsic Documentation

Severity: Medium

Description

An extrinsic in Substrate is a primary interface for interacting with the blockchain. It represents a transaction or an inherent operation submitted by users or the system itself, allowing them to invoke specific logic defined in the runtime. As extrinsics directly impact the state of the blockchain, ensuring their functionality, clarity, and correctness is crucial for maintaining the network's integrity. Proper documentation of extrinsics provides developers and users with the context needed to understand their purpose, inputs, permissions, and potential outcomes, reducing the risk of misuse or errors.

A lack of documentation for extrinsics can lead to confusion about their functionality, expected inputs, permissions, and possible errors. This makes it harder for developers and users to correctly interact with the runtime. Providing comprehensive documentation for extrinsics ensures clarity, improves usability, and offers valuable insights to both frontend clients and users, reducing the likelihood of misuse or unexpected errors.

What should be avoided

Leaving extrinsics undocumented makes it difficult to understand their behavior, which may lead to misuse or unexpected errors:

#![allow(unused)]
fn main() {
#[pallet::call_index(0)]
#[pallet::weight(T::WeightInfo::transfer())]
pub fn transfer(sender: OriginFor<T>, recipient: T::AccountId, amount: BalanceOf<T>) -> DispatchResult {
    // Performs operation with no explanation of parameters, permissions, or errors
}
}

Best practice

Document each extrinsic clearly, detailing its purpose, input parameters, permissions, and potential errors to improve usability and clarity:

#![allow(unused)]
fn main() {
/// Transfers a specified amount from the sender to the recipient.
///
/// # Parameters
/// - `sender`: The origin initiating the transfer.
/// - `recipient`: The account ID of the entity receiving the funds.
/// - `amount`: The amount to be transferred.
///
/// # Permissions
/// Only callable by accounts with sufficient balance to cover the `amount`.
///
/// # Errors
/// - `InsufficientBalance` if the sender's balance is too low.
/// - `InvalidRecipient` if the recipient account is invalid.
#[pallet::call_index(0)]
#[pallet::weight(T::WeightInfo::transfer())]
pub fn transfer(sender: OriginFor<T>, recipient: T::AccountId, amount: BalanceOf<T>) -> DispatchResult {
    // Function implementation here
}
}

In this example:

  • Each parameter, required permission, and potential error is clearly documented, ensuring users know exactly how to interact with the extrinsic and what conditions to expect.
  • This level of detail minimizes confusion and supports safer, more effective use of the extrinsic.

Include Error Documentation

Severity: Medium

Description

Errors in runtime code are crucial for communicating issues to developers and users. Without proper documentation, understanding the purpose and cause of specific errors can be challenging, leading to longer debugging cycles and potential misinterpretation of error messages. Documenting error variants ensures clarity and provides essential context directly in the codebase, making it easier for developers to identify and address issues efficiently.

What should be avoided

Leaving error variants undocumented can lead to confusion and slow down debugging:

#![allow(unused)]
fn main() {
#[pallet::error]
pub enum Error<T> {
    InsufficientBalance,
    InvalidAsset,
}
}

In this example:

  • No documentation is provided for the error variants inside the Error enum.

Best practice

Add documentation for each error variant to clarify its cause and expected behavior:

#![allow(unused)]
fn main() {
#[pallet::error]
pub enum Error<T> {
    /// Occurs when the user's balance is too low for the operation.
    InsufficientBalance,
    /// Indicates that the specified asset is not recognized.
    InvalidAsset,
}
}

In this example:

  • Documenting each error variant provides more context, helping developers understand the cause and appropriate response for each error.

Provide Event Documentation

Severity: Medium

Description

Events in Substrate serve as a critical communication layer between the runtime and external clients, providing insights into state changes and operations. Proper documentation of events is essential because it is embedded into the runtime's metadata, which can be accessed by frontend developers and users. This documentation helps users understand the purpose and context of emitted events, ensuring that interactions with the blockchain are clear and predictable. Without proper documentation, frontend developers may struggle to display meaningful messages or interpret blockchain actions, leading to a suboptimal user experience. By documenting each event variant, developers create a richer, more user-friendly ecosystem that bridges the gap between the runtime and external interfaces.

What should be avoided

Emitting events without descriptions or explanations:

#![allow(unused)]
fn main() {
#[pallet::event]
#[pallet::generate_deposit(pub(super) fn deposit_event)]
pub enum Event<T: Config> {
    AccountCreated,
    AccountUpdated,
    AccountDeleted,
    AccountLocked,
}
}

In this example:

  • No documentation is provided for the event variants inside the Event enum.

Best practice

Provide detailed comments for each event to explain its purpose and usage:

#![allow(unused)]
fn main() {
#[pallet::event]
#[pallet::generate_deposit(pub(super) fn deposit_event)]
pub enum Event<T: Config> {
    /// An account was just created. This event gets triggered when the account
    /// has a balance greater than the existential deposit for the first time.
    AccountCreated,
    /// The account got its balance updated, but over the existential deposit.
    AccountUpdated,
    /// The account got its balance under the existential deposit and its
    /// storage was completely wiped out.
    AccountDeleted,
    /// The account got blocked due to suspicious activities or misbehaviour.
    AccountLocked,
}
}

Provide Pallet Configuration Documentation

Severity: Medium

Description

Pallet configuration items (types, constants, or associated items within the Config trait) define critical customization points for integrating a pallet into a runtime. Proper documentation for these items is essential, as their descriptions are included in the runtime's metadata, which is accessible to frontend developers, runtime integrators, and other tools interacting with the blockchain. Clear documentation helps developers understand the purpose, constraints, and expected usage of configuration items, reducing the risk of misconfigurations. It also enables frontend developers to provide better user experiences by offering contextual information about the runtime’s configuration. Ensuring thorough documentation promotes clarity, collaboration, and robust integration within the Substrate ecosystem.

What should be avoided

Leaving pallet configuration items undocumented leads to ambiguity and reduces maintainability:

#![allow(unused)]
fn main() {
pub trait Config: frame_system::Config {
    type Currency: Inspect<Self::AccountId> + Mutate<Self::AccountId>;
    type MaxSize: Get<u32>;
}
}

In this example:

  • The Currency and MaxSize configuration items lack documentation, making their usage unclear to developers implementing the pallet.

Best practice

Document each pallet configuration item with a brief description of its purpose and constraints:

#![allow(unused)]
fn main() {
/// Configuration trait for the pallet.
pub trait Config: frame_system::Config {
    /// The currency mechanism for handling balances in the system.
    type Currency: Inspect<Self::AccountId> + Mutate<Self::AccountId>;

    /// Maximum allowed size for operations in this pallet.
    /// This value must be set according to expected workload and resource limits.
    type MaxSize: Get<u32>;
}
}

Documenting the pallet's configuration has several benefits:

  1. Improved Clarity: Developers implementing the pallet can easily understand the purpose and expected usage of configuration items.
  2. Reduced Misconfigurations: Clear documentation minimizes the risk of setting inappropriate values or types during runtime configuration.
  3. Better Collaboration: Well-documented configuration traits facilitate onboarding of new contributors and reduce the learning curve for your codebase.

By ensuring proper documentation for configuration items, you enhance the maintainability, usability, and robustness of your pallet.

Modularize Large Files

Severity: Medium

Description

In Substrate development, projects often contain a lib.rs file of a pallet or runtime configuration that is excessively large as it accumulates all logic, types, and configurations in one place. This is common in older templates but makes the codebase harder to navigate, understand, and maintain. Modern Substrate templates showcase best practices by modularizing lib.rs into smaller, purpose-specific modules, significantly improving code readability and scalability. Developers are encouraged to follow these newer conventions to align with contemporary standards and ensure their pallets are easier to maintain and extend.

What should be avoided

A single large file with multiple unrelated functions and types can become hard to work with:

#![allow(unused)]
fn main() {
// lib.rs (single large file with all logic and types)
fn process_transaction() -> Result<Transaction, Error> { /* transaction processing logic */ }

fn calculate_fees() -> u32 { /* fee calculation logic */ }

fn validate_data() -> Result<(), Error> { /* data validation logic */ }

pub struct SomeStruct {
  // ...
}

pub type MyType = BoundedVec<u8, ConstU32<8>>;

pub enum UsefulEnum {
  // ...
}
}

In this example:

  • All functionality and types are packed into one file, making it difficult to navigate and locate specific parts of the code.

Best practice

Split the code into smaller, purpose-specific modules to improve organization and readability:

#![allow(unused)]
fn main() {
// ./lib.rs (main module file)
mod helpers;
mod types;

// --- ./helpers.rs ---
pub fn process_transaction() -> Result<Transaction, Error> { /* transaction processing logic */ }

pub fn calculate_fees() -> u32 { /* fee calculation logic */ }

pub fn validate_data() -> Result<(), Error> { /* data validation logic */ }

// --- ./types.rs ---
pub struct SomeStruct {
  // ...
}

pub type MyType = BoundedVec<u8, ConstU32<8>>;

pub enum UsefulEnum {
  // ...
}
}

In this modularized structure:

  • Each module (helpers.rs for functions, types.rs for type definitions) groups related items, making the codebase more organized and easier to navigate.
  • Developers can work on specific modules without wading through unrelated code, enhancing maintainability and team collaboration.

Break Down Complex Functions

Severity: Medium

Description

Complex functions with multiple responsibilities are harder to test, understand, and maintain, increasing the risk of errors and making debugging more difficult. In Substrate runtime development, where precise logic is critical for the correct functioning of the blockchain, overly complex functions can lead to bugs that are challenging to identify and resolve, potentially impacting the entire network.

What should be avoided

Combining multiple responsibilities in a single function increases its complexity:

#![allow(unused)]
fn main() {
fn process_transaction() {
    // Transaction validation code
    // ...

    // Fee calculation code
    // ...

    // Balance update code
    // ...

    // Update the storage
    // ...
}
}

In this example:

  • The function mixes validation, fee calculation, balance updates, and storage modifications, making it difficult to pinpoint the source of issues or extend the logic without introducing errors.

Best practice

Apply the single responsibility principle by breaking down the function into smaller, focused functions:

#![allow(unused)]
fn main() {
fn process_transaction() {
    validate_transaction();
    apply_fees();
    update_balances();
    record_transaction();
}
}

This approach simplifies each function, making it easier to test and understand.

Enhance Performance with Efficient Data Structures

Severity: Medium

Description

Choosing inefficient data structures can lead to suboptimal performance, especially as data scales, causing slowdowns and increased resource usage. In Substrate runtime development, where performance is critical, inefficient operations can increase block execution time, reduce throughput, and lead to higher transaction costs.

This is a broad concept, and developers should select data structures suited to their specific use cases. Below are basic examples to highlight common pitfalls and improvements.

What should be avoided

Using a Vec for frequent lookups is often inefficient, as it requires linear-time search, which can significantly slow down performance with large data sets:

#![allow(unused)]
fn main() {
// Linear search
fn is_data_present(data: Vec<u32>, item: u32) -> bool {
    data.contains(&item)
}
}

In this example:

  • The Vec requires scanning all elements in the worst case, leading to poor performance as the data set grows.

Best practice

Option 1: Use more performant data structures

Data structures like HashSet or BTreeSet are optimized for faster lookups, with constant or logarithmic time complexity, respectively. These are better suited for scenarios requiring frequent membership checks or updates.

#![allow(unused)]
fn main() {
use std::collections::HashSet;

// Constant or logarithmic time search
fn is_data_present(set_data: HashSet<u32>, item: u32) -> bool {
    set_data.contains(&item)
}
}

In this example:

  • set_data enables faster and more efficient lookups, improving performance for large data sets.
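
Note that HashSet comes from std, which is not available inside a no_std runtime, and its iteration order is not deterministic. Within runtime code, a BTreeSet (available through alloc or sp_std) is usually the better fit, offering deterministic ordering and logarithmic-time lookups. A minimal sketch:

#![allow(unused)]
fn main() {
use std::collections::BTreeSet; // inside the runtime, use sp_std/alloc's BTreeSet

// Logarithmic time search with deterministic ordering
fn is_data_present(set_data: &BTreeSet<u32>, item: u32) -> bool {
    set_data.contains(&item)
}
}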

If you must use a Vec, keeping it sorted allows for binary search, which reduces the time complexity to logarithmic.

#![allow(unused)]
fn main() {
// Binary search provides a logarithmic time search.
// The array MUST be sorted.
fn is_data_present(vec_data: &[u32], item: u32) -> bool {
    vec_data.binary_search(&item).is_ok()
}
}

In this example:

  • A sorted Vec allows for efficient binary search, combining the simplicity of a Vec with improved lookup performance.

By choosing the right data structure or optimizing the existing one, developers can ensure efficient execution, scalability, and reduced resource usage in their runtime logic.

Define Constants to Replace Magic Numbers

Severity: Medium

Description

Magic numbers (unexplained numeric constants) make code difficult to understand and maintain because their purpose is often unclear without additional context or comments. In blockchain runtime development, where clarity is crucial to ensure correct behavior and security, such practices can lead to confusion, errors, or unintended consequences during updates or audits.

What should be avoided

Hardcoding numeric constants directly in the code makes their intent unclear:

#![allow(unused)]
fn main() {
// What does 7% represent?
let discount = price * Percent::from_percent(7);
}

In this example:

  • The meaning of 7 is ambiguous. Without context, other developers (or even the original author) may struggle to understand what it represents, increasing the risk of misinterpretation or misuse.

Best practice

Define constants with descriptive names to clarify their purpose and make code easier to read:

#![allow(unused)]
fn main() {
const DISCOUNT_RATE: Percent = Percent::from_percent(7);

let discount = price * DISCOUNT_RATE;
}

This approach improves readability and maintainability by making the purpose of constants explicit.

Implement Proper Interface Segregation

Severity: Medium

Description

Overloading interfaces with too many responsibilities creates unnecessary complexity, making code harder to understand, test, and maintain. The Interface Segregation Principle (ISP) advocates for designing smaller, more focused interfaces that are easier to manage and adapt. This principle ensures that implementing types are not burdened with methods they do not use, promoting modular and reusable code.

What should be avoided

Using a single interface (or trait) that encompasses multiple, unrelated responsibilities increases complexity and creates tight coupling:

#![allow(unused)]
fn main() {
trait FullService {
    fn save();
    fn load();
    fn process();
}
}

In this example:

  • The FullService trait combines unrelated methods for saving, loading, and processing. Any type implementing this trait would be forced to define all three methods, even if it only requires one or two of these responsibilities, leading to unnecessary dependencies.

Best practice

Break down interfaces into smaller, focused traits to simplify code and improve modularity:

#![allow(unused)]
fn main() {
trait Save {
    fn save();
}

trait Load {
    fn load();
}

trait Process {
    fn process();
}
}

In this example, each trait carries a single responsibility, so implementing types adopt only the functionality they actually need instead of being forced to satisfy the whole FullService surface.
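
For instance, a component that only persists data can implement just the trait it needs; Archiver below is a hypothetical type used purely for illustration:

#![allow(unused)]
fn main() {
struct Archiver;

// Archiver only needs persistence, so it implements `Save` alone and
// carries no unused `load` or `process` obligations.
impl Save for Archiver {
    fn save() {
        // persist data
    }
}
}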

Make BoundedVec Size Configurable

Severity: Medium

Description

In Substrate, using a fixed size for BoundedVec restricts flexibility and makes it harder to adapt to changing requirements. A hardcoded size is often not sufficient as the blockchain evolves, requiring updates that may lead to redeployments and governance interventions. By making the size configurable through the Config trait, developers can enhance maintainability and adaptability, enabling runtime-specific adjustments without altering the core pallet logic.

What should be avoided

Using a fixed size for BoundedVec without a configurable option restricts adaptability:

#![allow(unused)]
fn main() {
// Hardcoded size

#[pallet::storage]
pub type Domain<T> = StorageValue<_, BoundedVec<u8, ConstU32<256>>>;
}

Best practice

Define the size as a configurable parameter within the Config trait, which provides flexibility for future changes:

#![allow(unused)]
fn main() {
// --- In pallet/lib.rs file ----
#[pallet::config]
pub trait Config: frame_system::Config {
    type MaxDomainSize: Get<u32>;
}

#[pallet::storage]
pub type Domain<T: Config> = StorageValue<_, BoundedVec<u8, T::MaxDomainSize>>;


// --- In runtime/lib.rs file ---
impl some_pallet::Config for Runtime {
    // ...
    type MaxDomainSize = ConstU32<256>;
}
}

This approach allows the MaxDomainSize to be defined in the runtime configuration, making the code adaptable and easier to maintain.

Enhance Logging in Migration Scripts

Severity: Medium

Description

Logging messages in migration scripts should provide clear and specific information about the migration process. In Substrate-based blockchains, where migrations often involve updating on-chain state, detailed logs are critical for tracking progress, diagnosing issues, and ensuring transparency during the process. Insufficient logging can make it difficult to pinpoint errors or understand the steps taken during a migration.

What should be avoided

Using general log messages in migration scripts provides minimal information:

#![allow(unused)]
fn main() {
log::info!("Migration started");
}

In this example:

  • The log message gives no indication of what migration is running, what data is being processed, or whether specific conditions were met, making debugging and tracking nearly impossible.

Best practice

Use more detailed logging, including migration-specific information and conditions:

#![allow(unused)]
fn main() {
log::info!("Migration started for version: {}", current_version);
if let Some(bucket) = translate(process) {
    log::info!("Translated bucket data for migration: {:?}", bucket);
} else {
    log::warn!("Translation process returned None for bucket data");
}
}

In this example:

  • Each log message includes specific information, making it easier to trace migration steps and identify issues if they occur.
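
As a fuller sketch of where such logs typically live, the hypothetical migration below logs its start, per-entry progress, and completion (SomeMap, OldValue, and migrate_value are placeholder names, not part of any existing pallet):

#![allow(unused)]
fn main() {
use frame_support::{traits::OnRuntimeUpgrade, weights::Weight};
use sp_std::marker::PhantomData;

/// Hypothetical migration demonstrating detailed logging.
pub struct MigrateToV2<T>(PhantomData<T>);

impl<T: Config> OnRuntimeUpgrade for MigrateToV2<T> {
    fn on_runtime_upgrade() -> Weight {
        log::info!("MigrateToV2: migration started");
        let mut translated: u64 = 0;

        // Translate each stored value to the new format, logging progress.
        SomeMap::<T>::translate(|key, old: OldValue| {
            translated = translated.saturating_add(1);
            log::info!("MigrateToV2: translated entry {:?}", key);
            Some(migrate_value(old))
        });

        log::info!("MigrateToV2: finished, {} entries translated", translated);
        T::DbWeight::get().reads_writes(translated, translated)
    }
}
}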

Avoid Redundant Data Structures

Severity: Medium

Description

Storing the same information in multiple storage structures leads to redundancy, which increases storage complexity and usage. In Substrate runtimes, where every storage access contributes to execution weight, redundant storage not only wastes resources but also introduces risks of data inconsistency. Synchronizing multiple storage locations adds unnecessary complexity to the logic and increases the likelihood of errors, making maintenance more challenging.

What should be avoided

Avoid creating separate data structures or fields that duplicate data already present within another structure, as this can lead to inefficient data management.

#![allow(unused)]
fn main() {
pub struct Info<AccountId, BlockNumber> {
	pub owner: AccountId,
	pub valid_until: BlockNumber,
	pub more_data_1: u32,
	pub more_data_2: u32,
}

#[pallet::storage]
pub type SomeInfo<T: Config> =
	StorageMap<_, Blake2_128Concat, SomeId, Info<T::AccountId, BlockNumberFor<T>>, OptionQuery>;

#[pallet::storage]
pub type InfoOwner<T: Config> = StorageMap<_, Blake2_128Concat, SomeId, T::AccountId, OptionQuery>;
}

In this example:

  • The InfoOwner storage map is redundant since the owner is already stored within the Info structure that is part of the SomeInfo storage map.

Best practice

To prevent redundant storage usage, maintain data within a single structure or storage map whenever possible. If data needs to be accessed frequently, consider optimizing retrieval methods rather than duplicating data in multiple places.

#![allow(unused)]
fn main() {
pub struct Info<AccountId, BlockNumber> {
	pub owner: AccountId,
	pub valid_until: BlockNumber,
	pub more_data_1: u32,
	pub more_data_2: u32,
}

#[pallet::storage]
pub type SomeInfo<T: Config> =
	StorageMap<_, Blake2_128Concat, SomeId, Info<T::AccountId, BlockNumberFor<T>>, OptionQuery>;
}

In this example:

  • The InfoOwner storage map was deleted to remove storage redundancy. Retrieval of the owner field can now be performed directly from the Info structure within SomeInfo, ensuring consistency and reducing maintenance complexity.
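
If the owner is read frequently, a small helper can centralize the lookup without duplicating state; owner_of below is a hypothetical accessor:

#![allow(unused)]
fn main() {
impl<T: Config> Pallet<T> {
    /// Read the owner from the single source of truth.
    fn owner_of(id: &SomeId) -> Option<T::AccountId> {
        SomeInfo::<T>::get(id).map(|info| info.owner)
    }
}
}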

Implement Tests For All Error Cases

Severity: Medium

Description

Omitting tests for error cases in extrinsics can leave critical failure paths unverified, leading to unhandled scenarios and unexpected behavior in runtime execution. By neglecting to test that specific errors are emitted under invalid conditions, the robustness and predictability of the runtime can be compromised, especially under edge cases.

What should be avoided

Avoid leaving out tests that validate the expected errors when certain invalid inputs or conditions are encountered.

#![allow(unused)]
fn main() {
// lib.rs
#[pallet::storage]
pub type ManagerAccount<T: Config> = StorageValue<_, T::AccountId>;

#[pallet::call_index(0)]
#[pallet::weight(T::WeightInfo::do_something())]
pub fn do_something(origin: OriginFor<T>, input: u32) -> DispatchResult {
    let who = ensure_signed(origin)?;

    ensure!(ManagerAccount::<T>::get().as_ref() == Some(&who), Error::<T>::UnexpectedAccount);
    ensure!(input > 0, Error::<T>::IncorrectInput);
    
    // Rest of the logic

    Ok(())
}

// tests.rs
#[test]
fn do_something_works() {
    let right_account = ManagerAccount::<Test>::get().unwrap();
    let valid_input = 5;

    assert_ok!(SomethingPallet::<Test>::do_something(
        RuntimeOrigin::signed(right_account),
        valid_input
    ));
}
}

In this example:

  • The test only confirms that the function succeeds with valid inputs. It does not check if proper errors are triggered when invalid inputs or accounts are provided.

Best practice

Include tests that verify all error cases the extrinsic is expected to handle. This ensures predictable behavior even under incorrect inputs, improving the reliability and resilience of the runtime.

#![allow(unused)]
fn main() {
#[test]
fn do_something_works() {
    let right_account = ManagerAccount::<Test>::get().unwrap();
    let valid_input = 5;

    assert_ok!(SomethingPallet::<Test>::do_something(
        RuntimeOrigin::signed(right_account),
        valid_input
    ));
}

#[test]
fn do_something_fails_if_wrong_account() {
    let wrong_account = get_wrong_account();
    let valid_input = 5;

    assert_noop!(SomethingPallet::<Test>::do_something(
        RuntimeOrigin::signed(wrong_account),
        valid_input
    ), Error::<Test>::UnexpectedAccount);
}

#[test]
fn do_something_fails_if_invalid_input() {
    let right_account = ManagerAccount::<Test>::get().unwrap();
    let wrong_input = 0;

    assert_noop!(SomethingPallet::<Test>::do_something(
        RuntimeOrigin::signed(right_account),
        wrong_input
    ), Error::<Test>::IncorrectInput);
}
}

Avoid Resource Intensive Execution Inside Hooks

Severity: Medium

Description

Performing resource-intensive operations, such as iterating over large data sets, within runtime hooks like on_finalize can significantly impact block execution time and lead to performance bottlenecks. Hooks are executed automatically for every block, and if they contain computationally heavy tasks, they can reduce transaction throughput and degrade network performance, especially as on-chain activity scales.

What should be avoided

Avoid conducting heavy computations or iterating through extensive data in hooks, as this adds unnecessary workload to every block. For example, consider the following inefficient approach in on_finalize, which processes votes for each proposal that ends within a block:

#![allow(unused)]
fn main() {
impl<T: Config> Hooks<BlockNumberFor<T>> for Pallet<T> {
    fn on_finalize(block_number: BlockNumberFor<T>) {
        // Retrieve proposals that have ended at this block number
        let proposals = EndedProposals::<T>::get(block_number);

        for proposal_id in proposals {
            let mut ayes = 0;
            let mut nays = 0;

            // Retrieve all votes for the current proposal
            let votes = Votes::<T>::get(proposal_id);

            for vote in votes {
                match vote {
                    VoteType::Aye => ayes = ayes.saturating_add(1),
                    VoteType::Nay => nays = nays.saturating_add(1)
                }
            }

            // Process `ayes` and `nays` results as needed
        }
    }
}
}

In this example:

  • Counting votes for each finalized proposal during on_finalize leads to high resource usage and may exceed block weight limits, especially as the number of proposals and votes grows.

Best practice

Optimize by performing the calculations within the extrinsics, maintaining incremental counters in storage, or enabling users to trigger the logic explicitly outside the hooks.

Option 1: Perform the execution within the extrinsic

Track necessary values as they are submitted, distributing the computation workload across each transaction.

#![allow(unused)]
fn main() {
#[pallet::call_index(0)]
#[pallet::weight(T::WeightInfo::some_vote())]
pub fn some_vote(
    origin: OriginFor<T>,
    proposal_id: u32,
    vote: VoteType
) -> DispatchResult {
    // Verification logic
    // ...

    ProposalVoteAmount::<T>::mutate(proposal_id, |tally| {
        match vote {
            VoteType::Aye => tally.ayes = tally.ayes.saturating_add(1),
            VoteType::Nay => tally.nays = tally.nays.saturating_add(1),
        }
    });

    Ok(())
}

fn on_finalize(block_number: BlockNumberFor<T>) {
    // Retrieve proposals that have ended at this block number
    let proposals = EndedProposals::<T>::get(block_number);

    for proposal_id in proposals {
        let tally = ProposalVoteAmount::<T>::get(proposal_id);
        // Process `tally.ayes` and `tally.nays` results as needed
    }
}
}

Option 2: Allow users to trigger the logic explicitly

Allow users to close/finalize manually by explicitly calling an extrinsic when necessary, which avoids performing the work automatically in on_finalize.

#![allow(unused)]
fn main() {
#[pallet::call_index(0)]
#[pallet::weight(T::WeightInfo::close_proposal())]
pub fn close_proposal(
    origin: OriginFor<T>,
    proposal_id: u32
) -> DispatchResult {
    let current_block = frame_system::Pallet::<T>::block_number();
    let proposal = Proposals::<T>::get(proposal_id);
    ensure!(proposal.end_block < current_block, Error::<T>::ProposalHasNotEnded);

    let mut ayes = 0;
    let mut nays = 0;

    let votes = Votes::<T>::get(proposal_id);

    for vote in votes.iter() {
        match vote {
            VoteType::Aye => ayes = ayes.saturating_add(1),
            VoteType::Nay => nays = nays.saturating_add(1),
        }
    }

    // Process `ayes` and `nays` results as needed
    Ok(())
}
}

These approaches shift heavy computations away from automatic hooks and ensure more efficient block execution while maintaining network performance.

Transition Away from Currency Trait

Severity: Medium

Description

The Currency trait in Substrate is deprecated and should no longer be used in new implementations. Continuing to rely on deprecated traits risks compatibility issues with future framework updates and limits access to newer, more flexible features. Transitioning to the recommended traits, such as those in the fungible module (Inspect, Mutate, ...), ensures forward compatibility, aligns with modern development practices, and leverages the latest improvements in the Substrate ecosystem.

What should be avoided

Avoid using the deprecated Currency trait in new implementations:

#![allow(unused)]
fn main() {
use frame_support::traits::Currency;

#[pallet::config]
pub trait Config: frame_system::Config {
    // Currency trait is deprecated
    type Currency: Currency<Self::AccountId>;
}
}

In this example:

  • The Currency trait is used to handle account balances. This trait is already deprecated and should not be used.

Best practice

The usage of fungible traits is preferred instead.

#![allow(unused)]
fn main() {
// Import the traits from the fungible module
use frame_support::traits::fungible::{Inspect, Mutate};

#[pallet::config]
pub trait Config: frame_system::Config {
    // Use the preferred traits
    type Currency: Inspect<Self::AccountId> + Mutate<Self::AccountId>;
}
}
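
As a hedged usage sketch, the fungible traits can then be used inside the pallet like this (BalanceOf and pay_out are illustrative names introduced here, not part of any existing API):

#![allow(unused)]
fn main() {
use frame_support::traits::{
    fungible::{Inspect, Mutate},
    tokens::Preservation,
};

// Illustrative balance alias derived from the configured `Currency`.
type BalanceOf<T> =
    <<T as Config>::Currency as Inspect<<T as frame_system::Config>::AccountId>>::Balance;

// Transfer funds; `Expendable` allows the source account to be reaped.
fn pay_out<T: Config>(
    from: &T::AccountId,
    to: &T::AccountId,
    amount: BalanceOf<T>,
) -> frame_support::dispatch::DispatchResult {
    T::Currency::transfer(from, to, amount, Preservation::Expendable)?;
    Ok(())
}
}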

Avoid Unrestricted XCM Execution

Severity: Medium

Description

It is not recommended to allow arbitrary users to send unrestricted instructions via XCM, especially given the presence of the Transact operation, which enables extrinsic execution through this method.

While standard security measures for extrinsics generally apply here, certain vulnerabilities may still arise. Therefore, it is advisable to be as restrictive as possible unless a specific need justifies a more permissive approach, such as when the chain must regularly register assets on other chains.

What should be avoided

The following code allows extrinsics with XCM instructions to pass unrestricted through pallet-xcm:

#![allow(unused)]
fn main() {
type XcmExecuteFilter = Everything;
}

This configuration enables all locations and instructions to execute XCM extrinsics without any filtering, creating a significant attack surface.

Best practice

Restrict XCM execution to either no operations or to privileged users with clear justification. This limits the risk of exploitation and ensures that only authorized instructions are executed.

Example 1

#![allow(unused)]
fn main() {
type XcmExecuteFilter = Nothing;
}

In this example:

  • Setting the XcmExecuteFilter to Nothing disables the execution of extrinsics through XCM for all locations, ensuring a completely restricted environment.

Example 2

#![allow(unused)]
fn main() {
pub struct CustomExecuteFilter;
impl<XcmCall> Contains<(Location, XcmCall)> for CustomExecuteFilter {
    fn contains(t: &(Location, XcmCall)) -> bool {
        // Allow execution only when the origin is the desired location.
        t.0 == SomeLocation::get()
    }
    }
}

type XcmExecuteFilter = CustomExecuteFilter;
}

In this example:

  • A custom filter, CustomExecuteFilter, is defined to enforce execution restrictions based on the origin location. Only instructions originating from the specified SomeLocation are permitted, ensuring tight control over which sources can execute extrinsics via XCM.

By applying these restrictions, the execution of XCM instructions is securely controlled, reducing the risk of misuse or unintended operations.

Implement Proper XCM Fee Management

Severity: Medium

Description

The FeeManager trait is used to manage fees for executing XCM messages. When properly configured, it allows for fees to be distributed to specified accounts or components. However, if FeeManager is set to the empty tuple type (), all fees will be burned.

What should be avoided

Setting FeeManager to the unit type () should be done with caution. This setting will automatically burn all fees collected.

#![allow(unused)]
fn main() {
// Fees will be burned.
type FeeManager = ();
}

In this example:

  • FeeManager is set to (), meaning that there is no mechanism to process or allocate the collected fees, causing them to be automatically burned.

Best practice

Configure FeeManager to allow fees to be either deposited or distributed.

#![allow(unused)]
fn main() {
// Fees will be deposited into an account.
type FeeManager = XcmFeeManagerFromComponents<
    WaivedLocations,
    XcmFeeToAccountId20<Self::AssetTransactor, AccountId, StakingPot>,
>;
}

In this example, the FeeManager accepts WaivedLocations that are exempt from fees and transfers any charged fees to a StakingPot account.
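
For completeness, WaivedLocations is just a location filter. A minimal hypothetical definition that waives fees for the parent (relay chain) location could look like:

#![allow(unused)]
fn main() {
use frame_support::{parameter_types, traits::Equals};

parameter_types! {
    pub RelayLocation: Location = Location::parent();
}

// Locations matching this filter are exempt from XCM fees.
pub type WaivedLocations = Equals<RelayLocation>;
}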

Low Severity Issues

Description

Low-severity issues in the context of the Polkadot SDK are generally minor concerns that do not compromise the core functionality or security of the blockchain. These issues often relate to inefficiencies, inconsistent naming conventions, or areas where readability can be improved. While they are not critical, addressing these issues ensures the code remains clean, maintainable, and easy to understand. Resolving low-severity issues can enhance developer productivity, improve the quality of the codebase, and contribute to smoother long-term development and collaboration.

Implications

Although low-severity issues do not have an immediate impact on the system, addressing them during regular refactoring or maintenance helps prevent code clutter from accumulating. Over time, resolving these issues contributes to a more efficient, readable, and maintainable codebase, improving developer experience and overall project sustainability.

Use Appropriate Naming Conventions

Severity: Low

Description

Inconsistent or unclear naming conventions can create confusion, making the codebase harder to read and maintain. Vague function, variable, or type names can slow down development and increase the likelihood of errors, as developers may struggle to understand the purpose of various components. Clear and consistent naming conventions improve code readability, reduce ambiguity, and help maintain an organized codebase that is easier to work with, especially as the project grows. By adhering to proper naming conventions, developers ensure that their code is intuitive, maintainable, and scalable.

What should be avoided

Vague or inconsistent naming conventions make it unclear what a function, variable, or type is supposed to do. Here are several examples of poor naming practices:

Functions

#![allow(unused)]
fn main() {
// Function name lacks specificity.
// Also, it does not follow the snake_case naming convention.
fn processData() {
    // ...
}

// Function name is too generic.
fn doStuff() {
    // ...
}
}

Variables

#![allow(unused)]
fn main() {
// Variable name 'x' does not convey its purpose.
let x = 10;

// 'tmp' is not descriptive of its role.
let tmp = calculate_total();
}

Types

#![allow(unused)]
fn main() {
// The type alias 'B' provides no clarity about its purpose.
type B = u32;

// Struct and field names lack descriptive context.
struct Info {
    count: u32,
    extra_data: Vec<u8>,
}
}

In these examples:

  • The function names processData and doStuff are too vague, providing no insight into their purpose.
  • The variable names x and tmp do not describe the data they hold, making the code harder to follow.
  • The type alias B and the struct Info lack meaningful context, obscuring their intent.

Best practice

Use clear and descriptive names that convey the purpose and role of each function, variable, or type, and follow consistent naming styles:

Functions

#![allow(unused)]
fn main() {
// Descriptive function name specifies that it processes transaction data.
fn process_transaction_data() {
    // ...
}

// Descriptive name clearly conveys the function's purpose.
fn calculate_user_balance() {
    // ...
}
}

Variables

#![allow(unused)]
fn main() {
// Variable name 'total_price' indicates its role in the calculation.
let total_price = 10;

// Descriptive variable name reflects its purpose
let updated_balance = calculate_total();
}

Types

#![allow(unused)]
fn main() {
// Provides context that the alias represents a user's balance.
type AccountBalance = u32;

// Meaningful struct and field names improve clarity.
struct UserDetails {
    age: u32,
    email: Vec<u8>,
    balance: AccountBalance,
}
}

This approach helps improve readability and consistency across the codebase, making it easier for developers to understand and maintain the code.

Avoid Unnecessary Cloning

Severity: Low

Description

Unnecessary cloning and unused code increase memory usage and processing overhead, leading to inefficiencies in Substrate runtime development. Each clone operation results in additional memory allocations, which can quickly add up, especially in resource-constrained environments. By avoiding unnecessary cloning and ensuring that data is accessed directly through references, developers can improve both memory efficiency and performance, maintaining a cleaner and more optimized codebase.

What should be avoided

Cloning data unnecessarily creates additional memory allocations, as shown here:

#![allow(unused)]
fn main() {
fn process_data(data: &Vec<u32>) {
    // Cloning the entire vector unnecessarily
    let cloned_data = data.clone();

    for elem in cloned_data {
        // processing the element
    }
}
}

In this example:

  • The entire data vector is cloned, doubling the memory usage even if the original data can be processed directly or accessed via reference.

Best practice

Use references to avoid unnecessary cloning, and review code for unused or redundant sections regularly to keep the codebase lean:

#![allow(unused)]
fn main() {
fn process_data(data: &Vec<u32>) {
    // Process data directly via reference without cloning
    for elem in data {
        // elem is a reference here
    }
}
}

This approach eliminates the need for additional memory allocation, making the code more efficient and easier to maintain. By using references, you reduce both memory usage and potential performance overhead.

Avoid Hardcoded Error Messages

Severity: Low

Description

Hardcoding error messages directly in Substrate code can make it difficult to manage and update error handling across the runtime. When error messages are embedded within function logic, localization becomes cumbersome, and updating messages in the future may lead to inconsistencies. By using enums for error handling, developers can centralize and standardize error messages, making the code more flexible, easier to maintain, and adaptable to future changes, including localization for different languages or regions.

What should be avoided

Embedding error messages directly in function logic can be inflexible:

#![allow(unused)]
fn main() {
fn something_fails() -> Result<(), &'static str> {
    // ...
    Err("Insufficient balance")
}
}

Best practice

Store error messages in a centralized location or use an enum for error handling:

#![allow(unused)]
fn main() {
#[pallet::error]
pub enum Error<T> {
	/// The account does not have enough balance.
	InsufficientBalance,
}

fn something_fails<T: Config>() -> DispatchResult {
    // ...
    Err(Error::<T>::InsufficientBalance.into())
}
}

This approach makes error handling more flexible, allowing for easier updates and localization.

Adopt Enums for Optional Input

Severity: Low

Description

In Substrate development, using basic types (such as strings or integers) to represent distinct categories or optional input can lead to subtle bugs, such as typos or confusion in handling different states. Enums, on the other hand, offer a more structured approach, making it clear which values are valid and reducing the risk of errors. Using enums for representing states or categories improves both code readability and robustness, particularly in large codebases where clarity is essential.

What should be avoided

Using a string or integer to represent distinct categories increases the risk of typos and confusion:

#![allow(unused)]
fn main() {
// Using string literals for statuses
let mut status = "pending";

status = if condition {
    "completed"
} else {
    "aborted"
};
}

Best practice

Define an enum to represent distinct categories or statuses more clearly:

#![allow(unused)]
fn main() {
enum Status {
    /// The operation is still being confirmed and can potentially be aborted.
    Pending,
    /// The operation has been confirmed and it is waiting to be completed.
    InProgress,
    /// The operation was successfully completed.
    Completed,
    /// The operation was manually aborted by the user.
    Aborted,
}

let mut status = Status::Pending;

status = if condition {
    Status::Completed
} else {
    Status::Aborted
};
}

Enums make the code more robust and less prone to errors, improving readability.

Implement Descriptive Logging

Severity: Low

Description

In Substrate runtime development, logging is an essential tool for monitoring and debugging. Without descriptive and contextual logging, it becomes difficult to trace the flow of operations or identify issues effectively. Log messages that lack sufficient detail can make troubleshooting slow and inefficient, especially when dealing with complex or production-grade systems. By implementing descriptive logging, developers provide valuable insights that can help quickly diagnose problems, track system behavior, and improve the overall maintainability and observability of the blockchain runtime.

What should be avoided

Logging messages without context or specific details can make troubleshooting challenging and time-consuming:

#![allow(unused)]
fn main() {
log::info!("Process started");
}

Best practice

Add context and relevant details to log messages to improve clarity and facilitate debugging:

#![allow(unused)]
fn main() {
const LOG_TARGET: &str = "pallet-logging";
log::info!(target: LOG_TARGET, "Process started for user: {:?}", user_id);
}

This approach enhances traceability, readability, and the overall effectiveness of the logging system by providing meaningful information in each log entry.

Remove Unnecessary Return Values

Severity: Low

Description

Returning unnecessary values in functions can clutter the code, making it harder to read and maintain. Unused return values do not add value and can lead to confusion about the function's purpose. By removing redundant return values, developers can simplify function signatures, reduce cognitive load, and make the code more efficient. This practice also enhances the clarity of the function’s intent, helping future developers understand the logic more easily and ensuring a cleaner codebase.

What should be avoided

Returning an input parameter without modification is redundant and can be confusing:

#![allow(unused)]
fn main() {
fn project_validation(project_metadata: Metadata) -> Result<Metadata, Error> {
    validate_metadata(&project_metadata)?;

    // Returns the same value without changes
    Ok(project_metadata)
}
}

Best practice

If the return value is not needed, avoid returning it, focusing the function solely on its primary operation:

#![allow(unused)]
fn main() {
fn project_validation(project_metadata: &Metadata) -> Result<(), Error> {
    // Perform validation without returning project_metadata
    validate_metadata(project_metadata)?;

    Ok(())
}
}

In this approach:

  • The function is more straightforward, focusing only on validation without an unnecessary return value, improving clarity and maintainability.

Avoid Repetitive Generic Type Instantiation

Severity: Low

Description

Repeatedly defining complex generic types in Substrate can make the code unnecessarily verbose and harder to maintain. When a specific instance of a generic type is needed multiple times, repeating its definition can clutter the code, increase the risk of errors, and make it more challenging to update the type when changes are required. By using type aliases for complex generic types, developers can simplify the code, improve readability, and make it easier to maintain by ensuring that the type is only defined once.

What should be avoided

Avoid duplicating the entire definition of a complex type instance across multiple places in the code.

#![allow(unused)]
fn main() {
pub struct SomeInfo<BoundedNameString, BoundedDescString, InfoType, StatusType, AccountId, BlockNumber, MaxReasonLength> {
    pub name: BoundedNameString,
    pub description: BoundedDescString,
    pub info_type: InfoType,
    pub status: StatusType,
    pub created_by: AccountId,
    pub created_at: BlockNumber,
    pub reason: MaxReasonLength,
}

#[pallet::storage]
pub type InfoStorage<T: Config> = StorageMap<
    _,
    Blake2_128Concat,
    T::SomeId,
    SomeInfo<
        BoundedVec<u8, T::MaxNameLength>,
        BoundedVec<u8, T::MaxDescLength>,
        T::Info,
        T::Status,
        T::AccountId,
        BlockNumberFor<T>,
        BoundedVec<u8, T::MaxReasonLength>
    >,
    ValueQuery
>;

pub fn do_something(
    origin: OriginFor<T>,
    info: SomeInfo<
        BoundedVec<u8, T::MaxNameLength>,
        BoundedVec<u8, T::MaxDescLength>,
        T::Info,
        T::Status,
        T::AccountId,
        BlockNumberFor<T>,
        BoundedVec<u8, T::MaxReasonLength>
    >
) -> DispatchResult { /* Rest of the logic */ }
}

In this example:

  • The SomeInfo generic type is defined twice with the same parameters, making the code repetitive and harder to maintain.

Best practice

Define a type alias for the specific instance of the generic type, and reuse this alias throughout the code. By creating a single type alias, such as SomeInfoOf<T>, for the specific instance, you can reference it without repeating its full definition.

#![allow(unused)]
fn main() {
pub type SomeInfoOf<T: Config> = SomeInfo<
    BoundedVec<u8, T::MaxNameLength>,
    BoundedVec<u8, T::MaxDescLength>,
    T::Info,
    T::Status,
    T::AccountId,
    BlockNumberFor<T>,
    BoundedVec<u8, T::MaxReasonLength>
>;

#[pallet::storage]
pub type InfoStorage<T: Config> = StorageMap<_, Blake2_128Concat, T::SomeId, SomeInfoOf<T>, ValueQuery>;

pub fn do_something(
    origin: OriginFor<T>,
    info: SomeInfoOf<T>
) -> DispatchResult { /* Rest of the logic */ }
}

Update Benchmarks With Latest Syntax

Severity: Low

Description

Substrate's benchmarking system has evolved significantly between versions 1 and 2, with v2 introducing major improvements in flexibility, clarity, and modularity. In v1, the benchmarks! macro was used to define benchmarks in a less modular, more embedded format, making the code harder to maintain and update. The introduction of the #[benchmarks] attribute in v2 offers a cleaner, more organized approach. This new syntax separates setup, execution, and verification into distinct functions, improving code readability and making it easier to implement and update benchmarks as the Substrate framework evolves. Transitioning to v2 ensures compatibility with the latest tooling and enhances the overall development experience, providing a more robust foundation for performance testing in Substrate runtimes.

What should be avoided

#![allow(unused)]
fn main() {
benchmarks! {
	do_something {
        // Pre-benchmarking setup
        // ...
	}: _(RawOrigin::Signed(caller), data)
	verify {
        // Post-execution verifications
		// ...
	}
}
}

In this example:

  • The benchmark setup and verification steps are embedded directly within the benchmarks! macro, which is now deprecated in favor of more modular and explicit syntax.

Best practice

Use the new #[benchmarks] attribute syntax to define benchmarks in a more modular and explicit way. This structure improves code organization by separating each benchmark into its own function with a more comprehensive syntax.

#![allow(unused)]
fn main() {
#[benchmarks]
mod benchmarks {
	use super::*;

	#[benchmark]
	fn do_something() {
        // Pre-benchmarking setup
        // ...

        // execute benchmark
		#[extrinsic_call]
		_(RawOrigin::Signed(caller), data);

        // Post execution verifications
        // ...
	}
}
}

Expose Runtime APIs for Key Functionalities

Severity: Low

Description

Runtime APIs in Substrate are a vital mechanism for exposing key runtime functionalities to external clients, such as frontend applications, wallets, or other off-chain systems. These APIs provide a way for external parties to interact with the blockchain, query on-chain data, and invoke runtime logic remotely, all while maintaining the security and integrity of the blockchain.

Failing to expose essential functions via Runtime APIs limits access to valuable runtime data, preventing users and clients from retrieving crucial on-chain information or interacting with the blockchain efficiently. By exposing necessary runtime functionalities, developers can enable richer, more interactive decentralized applications (dApps), fostering a more connected ecosystem.

What should be avoided

Avoid limiting key functionalities to internal runtime use only, as shown below:

#![allow(unused)]
fn main() {
// Functionality only accessible inside the pallet.
pub fn pot_account() -> T::AccountId {
	T::PotId::get().into_account_truncating()
}
}

In this example:

  • Although a function is available to retrieve the account ID of an internal pot, it is only accessible within the runtime. This limitation prevents clients or users from querying the account balance or initiating transfers to this account, as there is no way to know which account this is.

Best practice

Expose necessary runtime functionalities by implementing Runtime APIs. This approach allows external users or clients to access useful information as needed.

// pallet/lib.rs
sp_api::decl_runtime_apis! {
    /// Runtime API for querying the pot account.
    pub trait PalletApi<AccountId>
    where
        AccountId: Codec,
    {
        /// Queries the pot account.
        fn pot_account() -> AccountId;
    }
}

// runtime/lib.rs
impl_runtime_apis! {
    impl pallet::PalletApi<Block, AccountId> for Runtime {
        fn pot_account() -> AccountId {
            pallet::Pallet::<Runtime>::pot_account()
        }
    }
}
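
On the node side, the bindings generated by decl_runtime_apis! can then be used to execute the call at a concrete block. A minimal sketch, assuming a recent sp_api version (where the generated methods take the block hash directly; older releases used a BlockId) and the runtime's concrete Block and AccountId types:

use sp_api::ProvideRuntimeApi;

/// Hypothetical node-side helper that queries the pot account as of block `at`.
fn query_pot_account<C>(
    client: &C,
    at: <Block as sp_runtime::traits::Block>::Hash,
) -> Result<AccountId, sp_api::ApiError>
where
    C: ProvideRuntimeApi<Block>,
    C::Api: pallet::PalletApi<Block, AccountId>,
{
    // `runtime_api()` comes from `ProvideRuntimeApi`; the generated
    // `pot_account` method executes the runtime logic at the given block.
    client.runtime_api().pot_account(at)
}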

Remove Unused Code

Severity: Low

Description

Unused code creates unnecessary clutter, making the code harder to read, maintain, and optimize. As the codebase grows, leaving unused functions, variables, or commented-out blocks in place leads to confusion and inefficiency. By removing redundant code instead of merely commenting it out, developers enhance the clarity of the codebase, making it easier for both current and future team members to understand and work with the system.

What should be avoided

Avoid leaving behind unused functions, variables, or commented-out code that is no longer necessary. These elements can lead to confusion and make the code harder to follow, especially as projects grow in size. Consider the example below:

// Example of unused code in a function

// fn process_important_data() {
//    // ...
//    // some logic that is no longer needed
//    // ...
// }

fn process_data(data: &Vec<u32>) -> Result<u32, Error> {
    // Unused variable that adds no value
    // let important_variable = data[0];

    // Unused function that is commented out
    // process_important_data();

    // Only this line is necessary
    let important_variable: u32 = compute_data();
    // ...

    Ok(important_variable)
}

In this example:

  • The process_important_data function is commented out and no longer needed but has not been removed.
  • The important_variable initialized with data[0] is also commented out and unnecessary.

Best practice

To keep the codebase clean and efficient, remove redundant elements such as unused functions, variables, and commented-out code. The streamlined version of the example above is more readable and maintainable:

fn process_data(data: &Vec<u32>) -> Result<u32, Error> {
    // Only the code that is actually used remains.
    let important_variable: u32 = compute_data();
    // ...

    Ok(important_variable)
}
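
The compiler can help enforce this practice. Rust already warns about unused items by default, and those warnings can be promoted to errors so that dead code fails the build; a sketch of an optional crate-level configuration:

// At the top of lib.rs: turn the default warnings for unused
// items and variables into hard errors.
#![deny(dead_code)]
#![deny(unused_variables)]

Note that lints cannot flag commented-out code, which still has to be caught during review.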

Informational Issues

Description

Informational issues in the context of Polkadot SDK are observations or recommendations that do not directly impact the functionality, security, or performance of the runtime. These issues are usually related to best practices, such as naming conventions, documentation standards, or suggestions to enhance clarity for developers. While not urgent, addressing these informational issues helps improve the overall structure, maintainability, and readability of the codebase, making it more accessible for current and future contributors.

Implications

Though informational issues do not have immediate technical consequences, resolving them contributes to a cleaner, more organized codebase. By following these recommendations, developers can ensure better documentation, promote adherence to best practices, and facilitate collaboration within the ecosystem, ultimately improving the quality and sustainability of the project over time.

Maintain Consistent Documentation Standards

Severity: Informational

Description

Inconsistent documentation standards can lead to confusion and hinder understanding, making it harder for developers, especially new contributors, to work with the codebase. Proper documentation ensures that each function, module, and component is well understood and easily navigable. Adhering to a consistent documentation standard brings clarity, reduces the likelihood of errors, and makes the code more maintainable, fostering better collaboration and smoother onboarding for new team members. FRAME pallets are a useful reference here: they follow a robust, standardized approach to both coding and documentation across the Substrate ecosystem, and studying them shows how comprehensive documentation improves maintainability and collaboration in larger projects.

What should be avoided

Lack of documentation or minimal comments can leave important details unclear, as shown below:

fn process_data(data: u32) -> u32 {
  // Function logic here
  // ...
}

In this example:

  • The function lacks a description of its purpose, parameters, and potential side effects, making it difficult for others to understand, use, or modify it confidently.

Best practice

Establish a consistent documentation standard that includes detailed descriptions, parameter explanations, and expected outcomes for each function.

/// Processes the provided data by performing necessary calculations.
///
/// # Parameters
/// - `data`: The input data to be processed, expected as an unsigned integer.
///
/// # Returns
/// - Result of the processing operation as a `u32`.
fn process_data(data: u32) -> u32 {
    // Function logic here
    // ...
}

With a standardized documentation format, each function is clearly explained, making the codebase easier to understand and maintain. This approach reduces the risk of misinterpretation and supports team collaboration.
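
The same standard applies at the module level. FRAME pallets conventionally open with //! crate-level documentation describing the pallet's purpose and interface; a minimal sketch of that convention, with purely illustrative content:

//! # Example Pallet
//!
//! A pallet that processes user-submitted data and stores the results.
//!
//! ## Overview
//!
//! Signed origins may submit data for processing. Results are stored
//! on-chain and can be queried by other pallets or via runtime APIs.
//!
//! ## Dispatchable Functions
//!
//! - `process_data`: Validates and processes the provided input.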

Avoid Typographical Errors

Severity: Informational

Description

Typographical errors, while often overlooked, can significantly affect the clarity and reliability of a codebase. In a decentralized environment where precision is critical, such errors can lead to confusion, incorrect assumptions, and even subtle bugs. Whether in variable names, comments, or documentation, ensuring accuracy in spelling helps maintain clear communication within the team and with external contributors, reducing the likelihood of misunderstandings and improving the overall quality of the project.

What should be avoided

Typographical errors can make code less readable and may even lead to bugs if used inconsistently:

// Typo in variable name.
let amout_valu = 100;

In this example:

  • The misspelled variable amout_valu is unclear and could lead to confusion, especially if referenced in multiple parts of the code.

Best practice

Perform thorough proofreading to catch typos and enhance clarity, ensuring variable names and comments are accurate and descriptive.

// Correctly spelled variable name.
let amount_value = 100;

Using clear and correctly spelled names improves readability, maintains professionalism, and helps prevent misunderstandings within the codebase.

Make Backend Logic Frontend-Agnostic

Severity: Informational

Description

Backend logic should be independent of frontend-specific details, such as display formats, localization preferences, or user interface requirements, to ensure flexibility and consistency across different interfaces. By decoupling the backend from frontend concerns, developers can create a more maintainable and scalable system that can adapt to various frontend implementations without needing significant backend changes. This approach allows the backend to provide raw, unformatted data that can be tailored by the frontend to meet specific needs, promoting cleaner, more reusable code.

What should be avoided

The following example ties backend logic to frontend-specific display preferences, which can cause inconsistencies and make the backend harder to adapt:

fn display_value() -> String {
    let value = get_value();

    // Formats the value with a frontend-specific currency display.
    format!("${:.2}", value)
}

In this example:

  • The function formats the value in a currency-specific way, which may not be consistent with other frontends or localization requirements.

Best practice

Keep backend functions agnostic to frontend requirements. Instead, return a raw value that can be formatted by the frontend as needed:

fn raw_value() -> u32 {
    let value = get_value();

    // Returns the raw value without any formatting.
    value
}

In this approach:

  • The backend returns a raw, generic data type (e.g., u32), allowing each frontend to format or display the value according to its own requirements.
  • This separation keeps backend code adaptable and frontend-agnostic, making it easier to support diverse interfaces and localization needs, as the sketch below illustrates.
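
Formatting then lives entirely at the edge. A minimal sketch of hypothetical client-side code, assuming the raw value is denominated in cents and a US-style currency display:

/// Hypothetical frontend helper: turns the backend's raw value into a
/// locale-specific display string. Other frontends may format differently.
fn format_for_display(raw: u32) -> String {
    format!("${:.2}", raw as f64 / 100.0)
}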

Use Proper Naming Criteria

Severity: Informational

Description

When naming terms related to assets, functions, or variables in Substrate development, it is crucial to avoid using well-known nomenclature that may already be strongly associated with other frameworks or ecosystems, such as Asset Hub. Misusing common terms like "foreign" or "native" can lead to confusion and misunderstandings, especially when the terminology overlaps with terms already established in other projects. It is recommended to conduct preliminary research into terms widely used by popular chains or modules to ensure clarity and consistency, preventing potential integration issues and improving the accuracy of documentation and code.

What should be avoided

In this example, assets bridged from another consensus system are referred to as "foreign", a term strongly associated with Asset Hub’s foreign assets. Given the chain’s intended integration with Asset Hub, this terminology could mislead developers, potentially causing fundamental misunderstandings about the asset types referenced in the documentation.

// `id` is assumed to be the identifier of the asset being transferred.
fn transfer_foreign(id: T::AssetId, receiver: T::AccountId, balance: u32) -> Result<(), Error> {
    // The name of this variable is not appropriate.
    let foreign_asset_balance: BalanceOfAsset<T> = balance.into();

    let pot_account: T::AccountId = Self::account_id();
    pallet_assets::Pallet::<T>::transfer_keep_alive(
        RawOrigin::Signed(pot_account).into(),
        id,
        receiver,
        foreign_asset_balance,
    )?;

    Ok(())
}

Best practice

After conducting initial ecosystem research, particularly on chains with which the product is designed to integrate, a more informed naming decision can be made.

let bridged_asset_balance: BalanceOfAsset<T> = balance.into();
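
Applied to the full function from the previous example, the renaming might look as follows (a sketch that keeps the assumed `id` parameter and the simplified transfer call):

fn transfer_bridged(id: T::AssetId, receiver: T::AccountId, balance: u32) -> Result<(), Error> {
    // "Bridged" states the asset's origin without clashing with
    // pallet-assets terminology such as "foreign" or "native".
    let bridged_asset_balance: BalanceOfAsset<T> = balance.into();

    let pot_account: T::AccountId = Self::account_id();
    pallet_assets::Pallet::<T>::transfer_keep_alive(
        RawOrigin::Signed(pot_account).into(),
        id,
        receiver,
        bridged_asset_balance,
    )?;

    Ok(())
}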

In this approach:

  • The terminology chosen clarifies the origin and nature of the assets.
  • Ambiguous terms like "native", "foreign", or "wrapped", which are commonly used in the context of pallet-assets, are avoided to prevent misunderstandings.