The Solana CLI includes get and set configuration commands to automatically set the --url argument for subsequent CLI commands. For example:
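For example, to target Devnet (the URL below is the standard public Devnet RPC endpoint):

```shell
solana config set --url https://api.devnet.solana.com
```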
While this section demonstrates how to connect to the Devnet cluster, the steps are similar for the other Solana Clusters.
Before attaching a validator node, sanity check that the cluster is accessible to your machine by fetching the transaction count:
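For example (the --url flag may be omitted if your configuration already points at Devnet):

```shell
solana transaction-count --url https://api.devnet.solana.com
```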
View the metrics dashboard for more detail on cluster activity.
Try running following command to join the gossip network and view all the other nodes in the cluster:
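One way to do this is with a gossip spy node; the entrypoint below assumes Devnet:

```shell
solana-gossip spy --entrypoint entrypoint.devnet.solana.com:8001
# Press ^C to exit
```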
If your machine has a GPU with CUDA installed (Linux-only currently), include the --cuda argument to solana-validator.
When your validator is started, look for the following log message to indicate that CUDA is enabled:
"[<timestamp> solana::validator] CUDA is enabled"
The solana repo includes a daemon to adjust system settings to optimize performance (namely by increasing the OS UDP buffer and file mapping limits).
The daemon (solana-sys-tuner) is included in the solana binary release. After each software upgrade, restart it before restarting your validator, to ensure that the latest recommended settings are applied.
To run it:
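A common invocation, run as root so the daemon can adjust kernel settings for the given user:

```shell
sudo $(command -v solana-sys-tuner) --user $(whoami) > sys-tuner.log 2>&1 &
```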
If you would prefer to manage system settings on your own, you may do so with the following commands.
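A sketch of the commonly recommended values; treat the exact numbers as assumptions to tune for your hardware:

```shell
# Increase the OS UDP buffer sizes
sudo bash -c "cat >/etc/sysctl.d/20-solana-udp-buffers.conf <<EOF
net.core.rmem_default = 134217728
net.core.rmem_max = 134217728
net.core.wmem_default = 134217728
net.core.wmem_max = 134217728
EOF"
sudo sysctl -p /etc/sysctl.d/20-solana-udp-buffers.conf

# Increase the memory-mapped file limit
sudo bash -c "cat >/etc/sysctl.d/20-solana-mmaps.conf <<EOF
vm.max_map_count = 700000
EOF"
sudo sysctl -p /etc/sysctl.d/20-solana-mmaps.conf
```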
Add LimitNOFILE=700000 to the [Service] section of your systemd service file, if you use one, otherwise add DefaultLimitNOFILE=700000 to the [Manager] section of /etc/systemd/system.conf.
Create an identity keypair for your validator by running:
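For example:

```shell
solana-keygen new -o ~/validator-keypair.json
```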
The identity public key can now be viewed by running:
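For example:

```shell
solana-keygen pubkey ~/validator-keypair.json
```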
Note: The "validator-keypair.json" file is also your (ed25519) private key.
You can create a paper wallet for your identity file instead of writing the keypair file to disk with:
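For example:

```shell
solana-keygen new --no-outfile
```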
The corresponding identity public key can now be viewed by running:
and then entering your seed phrase.
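One way to do this, if your CLI version supports the ASK keyword for prompting:

```shell
solana-keygen pubkey ASK
```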
See Paper Wallet Usage for more info.
You can generate a custom vanity keypair using solana-keygen. For instance:
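The prefix below is illustrative; grind for any prefix you like:

```shell
solana-keygen grind --starts-with e1v1s:1
```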
You may request that the generated vanity keypair be expressed as a seed phrase which allows recovery of the keypair from the seed phrase and an optionally supplied passphrase (note that this is significantly slower than grinding without a mnemonic):
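For example:

```shell
solana-keygen grind --use-mnemonic --starts-with ev1:1
```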
Depending on the string requested, it may take days to find a match...
Your validator identity keypair uniquely identifies your validator within the network. It is crucial to back-up this information.
If you don’t back up this information, you WILL NOT BE ABLE TO RECOVER YOUR VALIDATOR if you lose access to it. If this happens, YOU WILL LOSE YOUR ALLOCATION OF SOL TOO.
To back-up your validator identity keypair, back-up your "validator-keypair.json" file or your seed phrase to a secure location.
Now that you have a keypair, set the solana configuration to use your validator keypair for all following commands:
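For example:

```shell
solana config set --keypair ~/validator-keypair.json
```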
You should see the following output:
Airdrop yourself some SOL to get started:
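For example:

```shell
solana airdrop 1
```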
Note that airdrops are only available on Devnet and Testnet. Both are limited to 1 SOL per request.
To view your current balance:
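```shell
solana balance
```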
Or to see in finer detail:
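```shell
solana balance --lamports
```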
Read more about the difference between SOL and lamports here.
If you haven’t already done so, create a vote-account keypair and create the vote account on the network. If you have completed this step, you should see the “vote-account-keypair.json” in your Solana runtime directory:
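If needed, create the keypair with:

```shell
solana-keygen new -o ~/vote-account-keypair.json
```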
The following command can be used to create your vote account on the blockchain with all the default options:
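A sketch assuming the keypair files created above; the required arguments and signers vary by CLI version, so check solana create-vote-account --help:

```shell
solana create-vote-account ~/vote-account-keypair.json ~/validator-keypair.json
```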
Read more about creating and managing a vote account.
If you know and trust other validator nodes, you can specify this on the command line with the --trusted-validator argument to solana-validator. You can specify multiple ones by repeating the argument: --trusted-validator <PUBKEY1> --trusted-validator <PUBKEY2>.
This has two effects. One is that when the validator is booting with --no-untrusted-rpc, it will only ask that set of trusted nodes for downloading genesis and snapshot data. The other is that, in combination with the --halt-on-trusted-validators-accounts-hash-mismatch option, it will monitor the merkle root hash of the entire accounts state of other trusted nodes on gossip and, if the hashes mismatch, halt the node to prevent the validator from voting on or processing potentially incorrect state values. At the moment, the slot at which the validator publishes the hash is tied to the snapshot interval. For the feature to be effective, all validators in the trusted set should be set to the same snapshot interval value, or multiples of it.
It is highly recommended you use these options to prevent malicious snapshot state download or account state divergence.
Connect to the cluster by running:
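A typical Devnet invocation looks like the following; the paths and entrypoint are assumptions to adjust for your setup:

```shell
solana-validator \
  --identity ~/validator-keypair.json \
  --vote-account ~/vote-account-keypair.json \
  --rpc-port 8899 \
  --entrypoint entrypoint.devnet.solana.com:8001 \
  --limit-ledger-size \
  --log ~/solana-validator.log
```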
To force validator logging to the console add a
--log - argument, otherwise
the validator will automatically log to a file.
The ledger will be placed in the
ledger/ directory by default, use the
--ledger argument to specify a different location.
Note: You can use a paper wallet seed phrase for your --identity and/or --authorized-voter keypairs. To use these, pass the respective argument as
solana-validator --identity ASK ... --authorized-voter ASK ... and you will be prompted to enter your seed phrases and optional passphrase.
Confirm your validator connected to the network by opening a new terminal and running:
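For example, spy on gossip and look for your validator's identity public key in the output (the entrypoint assumes Devnet):

```shell
solana-gossip spy --entrypoint entrypoint.devnet.solana.com:8001
```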
If your validator is connected, its public key and IP address will appear in the list.
By default the validator will dynamically select available network ports in the 8000-10000 range; this may be overridden with --dynamic-port-range. For example, solana-validator --dynamic-port-range 11000-11010 ... will restrict the validator to ports 11000-11010.
The --limit-ledger-size parameter allows you to specify how many ledger shreds your node retains on disk. If you do not include this parameter, the validator will keep the entire ledger until it runs out of disk space.
The default value attempts to keep the ledger disk usage under 500GB. More or less disk usage may be requested by adding an argument to --limit-ledger-size if desired. Check solana-validator --help for the default limit value used by --limit-ledger-size. More information about selecting a custom limit value is available here.
Running the validator as a systemd unit is one easy way to manage running in the background.
Assuming you have a user called
sol on your machine, create the file
/home/sol/bin/validator.sh to include the desired
solana-validator command-line. Ensure that the 'exec' command is used to
start the validator process (i.e. "exec solana-validator ..."). This is
important because without it, logrotate will end up killing the validator
every time the logs are rotated.
Ensure that running
/home/sol/bin/validator.sh manually starts
the validator as expected. Don't forget to mark it executable with
chmod +x /home/sol/bin/validator.sh
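A sketch of a matching systemd unit at /etc/systemd/system/sol.service; the user, paths, and limits are assumptions to edit for your setup:

```
[Unit]
Description=Solana Validator
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=sol
LimitNOFILE=700000
LogRateLimitIntervalSec=0
Environment="PATH=/bin:/usr/bin:/home/sol/.local/share/solana/install/active_release/bin"
ExecStart=/home/sol/bin/validator.sh

[Install]
WantedBy=multi-user.target
```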
Start the service with:
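Assuming the unit is named sol.service:

```shell
sudo systemctl enable --now sol
```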
The messages that a validator emits to the log can be controlled by the RUST_LOG environment variable. Details can be found in the documentation for the env_logger Rust crate.
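For example, a common baseline setting:

```shell
export RUST_LOG=solana=info
```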
Note that if logging output is reduced, this may make it difficult to debug issues encountered later. Should support be sought from the team, any changes will need to be reverted and the issue reproduced before help can be provided.
The validator log file, as specified by
--log ~/solana-validator.log, can get
very large over time and it's recommended that log rotation be configured.
The validator will re-open its log file when it receives the USR1 signal, which is the basic primitive that enables log rotation.
If the validator is being started by a wrapper shell script, it is important to launch the process with exec (i.e. exec solana-validator ...) when using logrotate. This will prevent the USR1 signal from being sent to the script's process instead of the validator's, which will kill them both.
An example setup for logrotate, which assumes that the validator is running as a systemd service called sol.service and writes a log file at /home/sol/solana-validator.log:
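A sketch of such a logrotate configuration; the log path and service name are the ones assumed above:

```shell
cat > logrotate.sol <<EOF
/home/sol/solana-validator.log {
  rotate 7
  daily
  missingok
  postrotate
    systemctl kill -s USR1 sol.service
  endscript
}
EOF
sudo cp logrotate.sol /etc/logrotate.d/sol
systemctl restart logrotate.service
```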
As mentioned earlier, be sure that if you use logrotate, any script you create which starts the solana validator process uses "exec" to do so (example: "exec solana-validator ..."); otherwise, when logrotate sends its signal to the validator, the enclosing script will die and take the validator process with it.
Once your validator is operating normally, you can reduce the time it takes to restart your validator by adding the --no-port-check flag to your solana-validator command-line.
If you are not serving snapshots to other validators, snapshot compression can be disabled to reduce CPU load at the expense of slightly more disk usage for local snapshot storage. Add the --snapshot-compression none argument to your solana-validator command-line arguments and restart the validator.
If your machine has plenty of RAM, a tmpfs ramdisk may be used to hold the accounts database. When using tmpfs it's essential to also configure swap on your machine to avoid running out of tmpfs space periodically.
A 300GB tmpfs partition is recommended, with an accompanying 250GB swap partition.
- Create the tmpfs mount point: sudo mkdir /mnt/solana-accounts
- Add a 300GB tmpfs partition by adding a new line containing tmpfs /mnt/solana-accounts tmpfs rw,size=300G,user=sol 0 0 to /etc/fstab (assuming your validator is running under the user "sol"). CAREFUL: If you incorrectly edit /etc/fstab your machine may no longer boot
- Create at least 250GB of swap space
- Choose a device to use in place of SWAPDEV for the remainder of these instructions. Ideally select a free disk partition of 250GB or greater on a fast disk. If one is not available, create a swap file with sudo dd if=/dev/zero of=/swapfile bs=1MiB count=250KiB, set its permissions with sudo chmod 0600 /swapfile and use /swapfile as SWAPDEV for the remainder of these instructions
- Format the device for usage as swap with sudo mkswap SWAPDEV
- Add the swap file to /etc/fstab with a new line containing SWAPDEV swap swap defaults 0 0
- Enable swap with sudo swapon -a and mount the tmpfs with sudo mount /mnt/solana-accounts/
- Confirm swap is active with free -g and that the tmpfs is mounted
Now add the --accounts /mnt/solana-accounts argument to your solana-validator command-line arguments and restart the validator.
As the number of populated accounts on the cluster grows, account-data RPC requests that scan the entire account set -- like getProgramAccounts and SPL-token-specific requests -- may perform poorly. If your validator needs to support any of these requests, you can use the --account-index parameter to activate one or more in-memory account indexes that significantly improve RPC performance by indexing accounts by the key field. Currently supports the following parameter values:

- program-id: each account indexed by its owning program; used by getProgramAccounts
- spl-token-mint: each SPL token account indexed by its token Mint; used by getTokenAccountsByDelegate and getTokenLargestAccounts
- spl-token-owner: each SPL token account indexed by the token-owner address; used by getTokenAccountsByOwner, and getProgramAccounts requests that include an spl-token-owner filter.
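For example, to enable two of the indexes, repeat the flag once per index (the other validator arguments are elided):

```shell
solana-validator --account-index program-id --account-index spl-token-owner ...
```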