# Optimism (OP) FAQ

<details>

<summary><strong>About OP</strong></summary>

### **Optimism**

Optimism is a Layer 2 scaling solution for Ethereum that leverages rollup technology to bundle multiple transactions into batches before submitting them to Ethereum L1.

#### **Core Components Overview:**

* **Sequencer:** Orders and processes transactions before batching.
* **Proposer:** Posts state roots and finalization data to L1.
* **Batcher:** Sends compressed data batches to L1 for data availability.
* **RPC Nodes:** Support horizontal scaling for high RPC traffic.

</details>

<details>

<summary><strong>Setting Up an OP Rollup</strong></summary>

#### **Configuration Parameters Explained**

* **`batcher_max_channel_duration`:** Controls the maximum duration a batch can stay open before being submitted.
  * **Definition**: Specifies the maximum duration (in L1 blocks) that a batcher can keep a channel open before submitting the batched transactions to L1.
  * **Purpose**: Ensures timely submission of transactions, balancing between cost efficiency and data availability.
  * **Recommendation**: Set this value considering the L1 block time (e.g., for Ethereum, ~12 seconds per block) and the desired frequency of batch submissions.
* **`proposer_proposal_interval`:** Time interval for submitting state roots to L1.
  * **Definition**: Determines the time interval (in seconds) at which the proposer submits state roots to L1.
  * **Purpose**: Regular submissions help in maintaining the integrity of the rollup and facilitate L2 to L1 communication.
  * **Recommendation**: Align this interval with `batcher_max_channel_duration` to ensure synchronized operations and avoid redundant postings.
* **`data_availability_type`:** Specifies how the rollup handles data availability (either `blobs` or `calldata`).
  * **Definition**: Indicates the method used for data availability in the rollup, either `blobs` or `calldata`.
  * **Purpose**: Determines how transaction data is stored and retrieved, impacting scalability and security.
  * **Recommendation**: Choose `blobs` for blob storage or `calldata` for embedding data directly in transactions, based on your application's requirements.
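The relationship between these two parameters can be sketched as a small helper. This is an illustrative calculation, not an official tool: it assumes Ethereum's ~12-second L1 block time and converts a target batching window into a `batcher_max_channel_duration` value (in L1 blocks), plus a matching `proposer_proposal_interval` (in seconds).

```python
# Illustrative helper (not part of the OP Stack): convert a target batching
# window into batcher_max_channel_duration, measured in L1 blocks, and derive
# an aligned proposer interval. Assumes a ~12-second Ethereum L1 block time.

L1_BLOCK_TIME_SECONDS = 12

def channel_duration_blocks(target_window_seconds: int) -> int:
    """Max channel duration in L1 blocks for a target batching window."""
    return target_window_seconds // L1_BLOCK_TIME_SECONDS

def aligned_proposal_interval(duration_blocks: int) -> int:
    """Proposer interval (seconds) synchronized with the batcher's channel duration."""
    return duration_blocks * L1_BLOCK_TIME_SECONDS

# Example: a 5-hour batching window.
blocks = channel_duration_blocks(5 * 60 * 60)
print(blocks)                             # 1500 L1 blocks
print(aligned_proposal_interval(blocks))  # 18000 seconds (5 hours)
```

Keeping the proposer interval equal to the channel duration times the L1 block time is one simple way to satisfy the synchronization recommendation above.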

</details>

<details>

<summary><strong>Bridging and Token Transfers</strong></summary>

Bridging tokens between Layer 1 (Ethereum) and Layer 2 (Optimism) is essential for enabling asset movement across the two layers. The process involves both deposits (moving assets from L1 to L2) and withdrawals (returning assets from L2 to L1).

![](https://1439499438-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fti15D1ubA4vJmpJV8XEc%2Fuploads%2FRPHSufSg9FLu6Ny3iK19%2Fop-bridging-flowchart.jpg?alt=media\&token=cec39bd4-be5d-4b9b-b966-173765af5b65)

#### Deposit Flow (L1 to L2):

1. **User Initiation**: The user sends tokens to the L1 Standard Bridge contract.
2. **Message Relaying**: The L1 CrossDomain Messenger handles message transmission to L2.
3. **L2 Credit**: Tokens are minted or credited to the user’s L2 account upon message finalization.

#### Withdrawal Flow:

1. **User Initiation**: The user initiates a withdrawal on L2.
2. **Message Relaying**: The L2 CrossDomain Messenger processes the request.
3. **L1 Finalization**: Tokens are released to the user's L1 account after the challenge period.

#### Bridging Tokens Between L1 and L2

* ERC-20 tokens are typically bridged through the L1 Standard Bridge; some OP Stack configurations also expose deposits via the `OptimismPortalProxy.depositERC20Transaction` method.
* Approve the bridge contract to spend the token before depositing.
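The deposit and withdrawal flows above can be modeled as a small toy state machine. This is purely illustrative: the class and method names are invented for the sketch (real bridging goes through the deployed bridge contracts), and the ~7-day challenge period is an assumption based on typical mainnet settings.

```python
# Toy model of the L1<->L2 bridging flows described above. All names here
# are invented for illustration; they are not OP Stack contract interfaces.

CHALLENGE_PERIOD_BLOCKS = 50_400  # assumption: ~7 days of L1 blocks at 12 s

class ToyBridge:
    def __init__(self):
        self.l1_balances = {}          # net L1 token movement per user
        self.l2_balances = {}          # credited L2 balances per user
        self.pending_withdrawals = []  # (user, amount, initiated_at_block)

    def deposit(self, user, amount):
        # Deposit flow: L1 bridge escrows tokens, L2 credits the user.
        self.l1_balances[user] = self.l1_balances.get(user, 0) - amount
        self.l2_balances[user] = self.l2_balances.get(user, 0) + amount

    def initiate_withdrawal(self, user, amount, current_l1_block):
        # Withdrawal flow step 1-2: burn/debit on L2, record the request.
        self.l2_balances[user] -= amount
        self.pending_withdrawals.append((user, amount, current_l1_block))

    def finalize_withdrawals(self, current_l1_block):
        # Withdrawal flow step 3: release on L1 only after the challenge period.
        still_pending = []
        for user, amount, initiated_at in self.pending_withdrawals:
            if current_l1_block - initiated_at >= CHALLENGE_PERIOD_BLOCKS:
                self.l1_balances[user] = self.l1_balances.get(user, 0) + amount
            else:
                still_pending.append((user, amount, initiated_at))
        self.pending_withdrawals = still_pending
```

The key asymmetry the model captures is that deposits credit L2 as soon as the message finalizes, while withdrawals must wait out the challenge period before tokens are released on L1.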

</details>

<details>

<summary><strong>Configuration (Set Before Deployment)</strong></summary>

⚠ **Important:** The following parameters must be configured before rollup deployment and **cannot be changed after deployment.**

* **Batching Parameters**
  * **Max Number of Batches:** Default is `300`.
  * **Max Batch Size:** Default is `120,000` bytes.
  * **Max Batches Per Sequence:** Default is `300`; a higher value can be chosen at deployment.
* **Setting batcher\_max\_channel\_duration**
  * Ensure both `batcher_max_channel_duration` and `proposer_proposal_interval` are synchronized to prevent excessive postings to L1.
* **Sequencer Rate Limits and Batch Size**
  * **Sequencer Rate Limit:** Default rate settings can be modified based on traffic needs.
  * **Batch Size:** The batch size can be adjusted for both L1 and L2 throughput optimization.

#### **How to Decide Pre-Deployment Settings?**

**Recommendations:**

* For low-latency rollups (high-speed apps) → Lower values for `batcher_max_channel_duration` and `proposer_proposal_interval`.
* For cost-efficient rollups (large batch processing) → Higher values for these parameters to batch more transactions per submission.
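The two profiles above can be captured in a small helper. This is a hypothetical sketch: the function name and the concrete values are illustrative assumptions (a ~5-minute window for low latency, a ~5-hour window for cost efficiency, at a 12-second L1 block time), not official defaults.

```python
# Hypothetical pre-deployment profile picker mirroring the recommendations
# above. The concrete numbers are illustrative, not official OP Stack defaults.

def rollup_profile(profile: str) -> dict:
    if profile == "low_latency":
        return {
            "batcher_max_channel_duration": 25,    # L1 blocks (~5 minutes)
            "proposer_proposal_interval": 300,     # seconds, aligned: 25 * 12
        }
    if profile == "cost_efficient":
        return {
            "batcher_max_channel_duration": 1500,  # L1 blocks (~5 hours)
            "proposer_proposal_interval": 18000,   # seconds, aligned: 1500 * 12
        }
    raise ValueError(f"unknown profile: {profile}")
```

In both profiles the proposer interval equals the channel duration times the 12-second block time, following the synchronization guidance above.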

</details>

## Troubleshooting

<details>

<summary><strong>Faucet Not Sending Tokens</strong></summary>

Users have reported intermittent errors when attempting to receive tokens from the Optimism faucet, especially during periods of high demand. These errors often occur due to:

* **Server Load:** High traffic can temporarily overwhelm the faucet service.
* **Insufficient Funds:** The faucet may be out of funds for distribution.
* **Rate Limits:** Users might hit daily or IP-based rate limits.

#### **Suggested Solutions:**

* Wait a few hours and retry the claim process.
* Ensure the correct wallet network (L2) is selected.
* Try alternative testnet faucets if available.

[**Source: GitHub Optimism Docs Issue**](https://github.com/ethereum-optimism/docs/issues/1030)

</details>

<details>

<summary><strong>Max Number of Batches Per Sequence</strong></summary>

The **batcher** in the OP Stack is responsible for submitting L2 transaction data to L1 in a compressed format. To optimize performance and reduce costs, the `batcher_max_channel_duration` parameter controls how long a batch remains open before submission.

#### **Key Settings:**

* **Default:** `0` (disables the duration limit, so channels close based on size limits instead).
* **Recommended:** Set to target **5 hours** of batching, which corresponds to approximately **1500 L1 blocks** (assuming a 12-second block time on Sepolia).
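The 1500-block figure follows directly from the stated assumptions and can be verified with a one-line calculation:

```python
# Sanity check for the recommendation above: a 5-hour batching window
# at a 12-second L1 block time (as on Sepolia) is 1500 L1 blocks.
target_hours = 5
l1_block_time_s = 12
blocks = target_hours * 3600 // l1_block_time_s
print(blocks)  # 1500
```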

#### **Best Practices:**

* Adjust the batch size and interval based on network traffic patterns for cost optimization.

[**Source: Optimism Batcher Configuration**](https://docs.optimism.io/builders/chain-operators/configuration/batcher)

</details>

<details>

<summary><strong>Resources</strong></summary>

* [Optimism Official Documentation](https://docs.optimism.io/)
* [Optimism Stack Specification](https://specs.optimism.io/)
* [Smart Contract Repositories](https://github.com/ethereum-optimism)
* [OP Stack Configuration Tools](https://docs.optimism.io/builders/chain-operators/management/configuration)

</details>
