SUNCAN Native PCIe CAN FD: The Fastest Path to Efficiency (Application Note)

1. Introduction

Given the rapidly growing demand for CAN FD, the market already offers a variety of CAN FD PCIe add-on cards. Although CAN FD supports data-phase bitrates of up to 8 Mbps with suitable transceivers, the actual performance of these cards varies significantly between brands, mainly because of differences in their software and hardware architectures. In this article we set software-related factors aside and explain why the SUNCAN hardware architecture, Native PCIe CAN FD, is the most efficient design.

2. Two Typical Architectures Found in the Market

1) MCU-Based PCIe CAN FD

The system architecture is illustrated in the figure below. Data originating from the host is first relayed through the PCIe-to-USB host controller to the MCU. The MCU subsequently encapsulates the raw data into a CAN FD frame, which is then transmitted to the CAN FD controller.
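To make the extra hops concrete, the sketch below models this path as a chain of store-and-forward stages. The per-stage latencies are hypothetical placeholders chosen only to illustrate how the stages accumulate; they are not measured values for any product.

```python
# Illustrative store-and-forward model of the MCU-based data path.
# Every per-stage latency below is a hypothetical placeholder.
HOPS_MCU_BASED = [
    ("host driver -> PCIe-to-USB host controller", 20e-6),
    ("USB scheduling to MCU",                      125e-6),  # USB HS microframe granularity
    ("MCU firmware: parse + CAN FD framing",       50e-6),
    ("MCU -> CAN FD controller transfer",          10e-6),
]

def path_latency(hops):
    """Sum the per-stage latencies to estimate one-way frame latency."""
    return sum(t for _, t in hops)

total_latency = path_latency(HOPS_MCU_BASED)
print(f"modeled one-way latency: {total_latency * 1e6:.0f} us")
```

The point of the model is structural: each intermediate stage adds its own queuing and processing delay, so removing a stage removes its entire contribution.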

2) PCIe CAN FD with External Controller

The system architecture is illustrated in the figure below. Data originating from the host is first processed by the PCIe controller and is subsequently transmitted via SPI to the external CAN FD controller.
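The SPI hop adds a fixed per-transaction cost on top of the payload itself. The sketch below estimates the time to clock one frame across the link, assuming a hypothetical controller that requires a one-byte command and a two-byte register address per transfer; real controllers differ in their command formats.

```python
def spi_transfer_time(payload_bytes, spi_hz, cmd_bytes=1, addr_bytes=2):
    """Time to clock one SPI transaction: command + address + payload.
    Ignores chip-select setup/hold and inter-transaction gaps, which
    add further overhead in practice."""
    total_bits = 8 * (cmd_bytes + addr_bytes + payload_bytes)
    return total_bits / spi_hz

# Moving a 64-byte CAN FD payload over a 10 MHz SPI link:
t = spi_transfer_time(64, 10e6)
print(f"one SPI transaction: {t * 1e6:.1f} us")
```

At heavy traffic the driver must issue one such transaction per frame (plus status reads), which is where the "medium" latency and higher CPU load of this architecture come from.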

3. SUNCAN – Native PCIe CAN FD

The SUNIX SUNCAN adopts an approach similar to Architecture 2; however, the proprietary SUNIX CAN FD controller eliminates the additional SPI layer. Data from the host, once processed by the PCIe controller, is directly delivered to the SUNIX CAN FD controller, thereby significantly reducing latency. The corresponding system architecture is shown below.

4. Conclusion

From the three architecture diagrams in the previous sections, it can be seen that SUNCAN’s design significantly reduces the latency of frame packaging and shortens the data path, making it one of the most efficient designs. This is even more evident from the summary in the table below.

Native PCIe CAN FD
End-to-End Latency: Lowest (depends on driver and interrupt handling)
CPU/Driver Overhead: Lowest when implemented with DMA/MMIO, ring buffers, and batch processing.
Common Bottlenecks: CAN FD line rate, host interrupt handling, and driver loop inefficiencies.

PCIe CAN FD with External Controller
End-to-End Latency: Medium (depends on the SPI clock and transaction batching)
CPU/Driver Overhead: Each SPI transaction carries command/register overhead, and under heavy traffic the driver must manage many transactions.
Common Bottlenecks: SPI clock rate, per-transaction overhead, chip-select toggling, and single-master arbitration.

MCU-Based PCIe CAN FD
End-to-End Latency: Highest (due to USB host scheduling and MCU stack processing)
CPU/Driver Overhead: The MCU handles CAN framing, but the host still performs buffer management.
Common Bottlenecks: USB High-Speed bandwidth is sufficient, but the MCU must parse USB packets, reassemble CAN frames, and manage its peripherals, which adds load and frame latency.
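The "DMA/MMIO, ring buffers, and batch processing" noted for the native design refers to a driver pattern like the one sketched below: frames are queued in a ring and handed to the device in batches, so one doorbell or interrupt covers many frames. This is a minimal software model of the pattern, not SUNIX driver code.

```python
from collections import deque

class TxRing:
    """Minimal model of a TX ring buffer with batch draining, the
    pattern a native PCIe driver can use to amortize per-frame cost.
    Hardware DMA/MMIO details are deliberately omitted."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.ring = deque()

    def push(self, frame):
        """Queue a frame; a full ring tells the caller to back off."""
        if len(self.ring) >= self.capacity:
            return False
        self.ring.append(frame)
        return True

    def drain(self, batch_size):
        """Hand up to batch_size frames to the device in one pass,
        so a single doorbell/interrupt covers the whole batch."""
        batch = []
        while self.ring and len(batch) < batch_size:
            batch.append(self.ring.popleft())
        return batch
```

Batching is what keeps the native design's CPU overhead low: the per-frame cost shrinks toward the cost of a memory write, instead of a full transaction per frame.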

Evidence – Throughput and CPU Loading

Architecture                               Throughput   CPU Loading
Native PCIe CAN FD                         1.17 Mbps    28%
PCIe CAN FD with External Controller       680 kbps     85%
MCU-Based PCIe CAN FD                      410 kbps     79%

Bitrate: 1 Mbps | Software: SocketCAN | Transmission mode: two CAN channels in peer-to-peer transmission | Command: cangen -g 0 -L 8
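Achievable throughput also depends on CAN FD protocol overhead, which is substantial for short payloads. The sketch below approximates the bit count of a CAN FD frame with an 11-bit identifier using nominal field sizes; dynamic bit stuffing is ignored, so real frames are somewhat longer, and the field widths here are approximations rather than an exact encoding.

```python
def canfd_frame_bits(payload_bytes):
    """Approximate bit count of a CAN FD frame with an 11-bit ID.
    Nominal field sizes only; bit stuffing is ignored, so real
    frames on the wire are somewhat longer."""
    # SOF + 11-bit ID + control bits (RRS/IDE/FDF/res/BRS/ESI + DLC)
    # + CRC field + CRC delimiter + ACK slot/delimiter + EOF + IFS
    overhead = 1 + 11 + 9 + 21 + 1 + 2 + 7 + 3
    return overhead + 8 * payload_bytes

def payload_efficiency(payload_bytes):
    """Fraction of the frame's bits that carry payload."""
    return 8 * payload_bytes / canfd_frame_bits(payload_bytes)

# With cangen -L 8, only about half the bits on the wire are payload;
# the 64-byte maximum payload uses the bus far more efficiently.
print(f"8-byte frames:  {payload_efficiency(8):.0%} payload")
print(f"64-byte frames: {payload_efficiency(64):.0%} payload")
```

This is why the "Common Bottlenecks" row lists the CAN FD line rate itself for the native design: once the host-side path is efficient, the bus protocol becomes the limiting factor.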

