Software Development Checklist
This checklist is intended to verify the reliability, maintainability, and hardware compatibility of software in embedded systems. Each item lists the key points to review in terms of code quality, error detection, performance measurement, and power management.
Version Management and Documentation
1. Are all software versions archived?
Each software version should be stored with build date, version number, and change description. This archiving is mandatory for both development tracking and field support. Version archives should include: source code, compiled binary (.hex / .bin / .elf), build scripts, and configuration files.
Archiving should be done automatically in version control systems such as Git, SVN, or Mercurial. A hash (commit ID) or semantic version (e.g., v1.2.3) should be generated by the system for each build. When necessary, old versions should be reloadable by matching with product serial number.
This item is a basic requirement of Configuration Management (CM) systems (see ISO 10007:2017).
2. Is revision history maintained for each change?
Revision history makes software evolution traceable. Each change should be technically explained and associated with the responsible developer. Information that should be in revision records: Date (day the change was made), Developer (responsible person or team), Description (which module was changed and why), Version number (version tag after change e.g., 1.0.5 → 1.0.6). Each change should be collected in "changelog.md" or "release_notes.txt" file. Developers should use work order (ticket ID) or bug code (bug ID) references in commit messages. For field firmware, version history should be visible to the customer.
This item corresponds to the change tracking requirement in IEEE 828 – Configuration Management in Systems and Software Engineering standard.
3. Are design notes available in code or separate documentation?
Software understandability is measured not only by code quality but by clearly documenting design intent. Design notes explain how and why critical functions were written this way. Function-level comments (docstring / comment block) should be included in code. Algorithm explanations should be kept in a separate "Software Design Note" document (e.g., SW_Design_Notes_RevB.pdf). Input/output parameters, processing time, and error conditions should be defined for each module. Flow charts or state machine (FSM) diagrams should be added to documentation. Critical software sections (e.g., ISR, RTOS task, DMA handler) should be explained with detailed comments.
This item meets the Design Description Documentation principle in DO-178C (Software Considerations in Airborne Systems) standard.
4. Are version numbers defined in data structures?
Software versions should be tracked not only at code level but also in data structures and communication protocols. This is mandatory to maintain backward compatibility in firmware updates. Each data packet or EEPROM/Flash structure should carry a structure version tag (e.g., struct_version = 0x02).
In firmware upgrades, old data should be automatically converted to new format. Major/minor versioning should be applied in protocol changes (e.g., v1.2 → v2.0). This information should be in "Software Interface Control Document (ICD)".
This item corresponds to the configuration item traceability requirement in ISO/IEC/IEEE 12207 – Software Lifecycle Processes standard.
Code Quality and Standards
5. Is coding style defined and consistently applied?
Coding style is the "common language" of software architecture. Having all developers in the team write code in the same format reduces maintenance time and lowers error rates. Coding style should be defined in writing at project start.
Example standards: MISRA-C (Motor Industry Software Reliability Association) for embedded safety systems; CERT-C for secure C programming; the Google C++ Style Guide for modular, modern code structure; in-house guidelines (e.g., "Company_C_Style_v2.1.pdf"). Style compliance should be verified with automatic tools during code review (e.g., clang-format, uncrustify). Style violations should be treated as quality-gate failures, not merely as "warnings".
This item supports the "Consistency & Maintainability" criterion of ISO/IEC 5055 – Software Engineering: Software Quality Measurement standard.
6. Are variable, function, and file naming conventions consistent?
Code readability is directly related to consistency of naming conventions. Having names clear, meaningful, and fit for purpose both reduces error rates and facilitates new developers' adaptation to the process.
Naming rules should be defined in project guidelines (e.g., "Naming_Conventions.md"). Functions: should be action-based and meaningful (e.g., ReadTempSensor() or InitUART()). Variables: should contain clear context (uint16_t batteryVoltage_mV; preferred). Constants (define, enum): uppercase with underscores (MAX_BUFFER_SIZE). Files: reflect content (temp_sensor.c, adc_driver.h). Complex abbreviations (RTS1(), SMPX()) should only be used if explained in technical documentation.
This item corresponds to the "Understandability" sub-criterion of ISO/IEC 25010 – Software Product Quality Model.
7. Is automatic static analysis (LINT, cppcheck, etc.) performed on code?
Static analysis enables detection of potential errors, memory overflows, and security vulnerabilities before code execution. This process is the "first line of defense" before testing. Static analysis tools should be integrated into development process: LINT / PC-Lint (classic analysis tool for embedded C code), Cppcheck (C/C++ open-source static analysis), SonarQube (enterprise-level analysis and reporting infrastructure), Clang Static Analyzer or Coverity Scan alternatives. Analysis outputs should be evaluated in code review meetings. "Severity" levels should be determined: Critical / Major / Minor. Static analysis reports should be integrated into build pipeline and included in CI (Continuous Integration) process.
This check is compatible with ISO/IEC 27034 – Application Security and MISRA-C:2012 Rule 2.x requirements.
8. Do code format, spaces, indentation, and comments comply with a specific standard?
Providing aesthetic consistency to code is not just a visual preference; it's important for readability, error detection, and team productivity. Code formatting rules should be applied with automatic tools (clang-format, astyle).
Indentation: a fixed policy of 4 spaces or tabs should be determined. Comments: inline (//), multi-line (/* ... */), function descriptions in Doxygen format. Comment density in code should be in the 20–25% range. Automatic check: pre-commit hook or CI lint pipeline.
This item meets the Coding and Documentation Standards requirement of ISO/IEC 29110 – Software Engineering Lifecycle Profile for Small Enterprises.
Flow Controls and Loops
9. Do all loops have termination conditions?
Loops without termination conditions cause the software to hang or trigger watchdog resets; in real-time systems this can render the entire system non-functional. Every for, while, and do-while loop should contain an explicit termination condition. If a loop condition depends on external factors (e.g., a sensor signal or flag variable), a timeout mechanism should be defined.
System behavior after timeout should be determined (e.g., "abort", "retry", "safe mode"). Watchdog reset should only be used for recovery after critical error, not as part of normal flow. These checks should be verified in static analysis (MISRA Rule 14.2) and unit test coverage stages.
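As an illustrative sketch (the flag name and tick function below are hypothetical placeholders, not taken from this checklist), a loop with an explicit bound and a timeout decision left to the caller might look like this:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical flag normally set by an ISR or hardware event. */
static volatile bool sensor_ready = false;

/* Stands in for a one-tick platform delay (e.g. 1 ms on real hardware). */
static void tick_wait(void) { }

/* Returns true if the flag was seen before the tick budget ran out. */
bool wait_for_sensor(uint32_t timeout_ticks)
{
    for (uint32_t t = 0; t < timeout_ticks; ++t) {  /* explicit bound */
        if (sensor_ready) {
            return true;                             /* normal exit */
        }
        tick_wait();
    }
    return false;  /* caller decides: abort, retry, or safe mode */
}
```

The return value makes the post-timeout behavior an explicit decision at the call site rather than an implicit hang.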
This item meets IEC 61508-3:2010 Clause 7.4.3 – Software Control Structures requirement.
10. Are all branches tested?
The basic measure of software test coverage is branch coverage rate. All conditions, error paths, and exception cases should be tested. Both positive and negative case tests should be performed for each if-else, switch-case, try-catch structure.
Coverage target: Statement Coverage: ≥ 90%, Branch Coverage: ≥ 80%. Error cases (e.g., null pointer, invalid data, I/O timeout) should be verified with special test scenarios.
Test tools: Ceedling (CUnit), GoogleTest, Unity Framework, gcov / lcov (coverage analysis).
This item defines test coverage compliant with ISO/IEC/IEEE 29119-4 – Software Testing Techniques and DO-178C Table A-7 criteria.
11. Are timers and delay functions tested for accuracy?
Timing errors in real-time systems directly affect product stability. Accuracy of timer and delay functions should be tested depending on microcontroller clock frequency and interrupt latency conditions.
Each timer function (delay_ms, timeout, scheduler tick) should be verified with reference time source (e.g., oscilloscope, logic analyzer). Tolerance target: ±1% accuracy (e.g., 1 s delay → 0.99–1.01 s). In real-time operating systems (RTOS), task scheduling jitter measurements should be performed. Workload inside Timer ISR (interrupt service routine) should be analyzed; ISR duration should not affect task priority.
This item is compatible with ISO/IEC/IEEE 24765 – Real-Time Systems Vocabulary and AUTOSAR Timing Extensions principles.
12. Are FIFO and buffer overflows checked?
Buffer overflows are one of the most common and dangerous error types in embedded systems. Overflows can lead to data corruption, system crashes, or security vulnerabilities. Write and read boundary checks should be performed in each FIFO or buffer structure.
Array or memory accesses should be secured by checking array bounds. Modular boundary check ((index + 1) % size) should be applied for ring buffer structures. Static analysis (LINT, cppcheck) and AddressSanitizer / Valgrind tests should be performed during development.
Buffer should be cleared or overflow flag should be set in error cases.
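A minimal ring buffer with the modular boundary check described above might be sketched as follows; it rejects writes when full rather than overwriting, which is one of several valid overflow policies:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define RB_SIZE 8u   /* one slot is sacrificed to distinguish full/empty */

typedef struct {
    uint8_t data[RB_SIZE];
    size_t  head;   /* next write position */
    size_t  tail;   /* next read position */
} ring_buffer_t;

bool rb_put(ring_buffer_t *rb, uint8_t byte)
{
    size_t next = (rb->head + 1u) % RB_SIZE;   /* modular boundary check */
    if (next == rb->tail) {
        return false;            /* full: reject instead of overwriting */
    }
    rb->data[rb->head] = byte;
    rb->head = next;
    return true;
}

bool rb_get(ring_buffer_t *rb, uint8_t *out)
{
    if (rb->tail == rb->head) {
        return false;            /* empty */
    }
    *out = rb->data[rb->tail];
    rb->tail = (rb->tail + 1u) % RB_SIZE;
    return true;
}
```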
This item is directly related to CERT-C Rule ARR30-C: Do Not Form or Use Out-of-Bounds Pointers and MISRA-C 18.1 rules.
13. Are critical timer driver codes tested?
Driver codes related to peripherals such as PWM, ADC, DAC, I²C, SPI form the core of system stability. Therefore, both functional and timing accuracy of these drivers should be tested. Driver tests should be prepared for each peripheral: PWM (Duty cycle accuracy, frequency stability), ADC (Sampling frequency, resolution and average error), DAC (Output stability, zero point offset), I²C/SPI (Communication synchronization, timeout management).
Driver functions should be verified in both isolated test (module test) and system test (integration test). Test results should be added to "Driver Verification Report" document.
This item meets validation requirements compatible with IEC 60730 – Automatic Electrical Controls for Household and Similar Use (Software Testing) standard.
Error Management and Shutdown Behaviors
14. Are power-up / power-down states handled?
If system behavior during power loss or unexpected shutdowns is not predefined, data loss or hardware damage can occur. Therefore, every product should have a controlled power-up and shutdown algorithm.
During power-up phase: Power line sequencing should be monitored by software, Initialization steps should run in a specific order (e.g., sensor → communication → application), EEPROM/Flash write operations should wait until the supply voltage has stabilized.
During power-down phase: Temporary data (config, log, counters) should be safely saved to NVM, Power availability should be verified before write operations are interrupted (e.g., power-fail detect pin), Critical operations should be completed in "atomic" manner (uninterruptible transaction block).
This item meets IEC 60730-1 Annex H – Software Power Cycle Integrity requirements.
15. Are warm and cold reset differences defined?
Improper management of different reset types can lead to instability in system startup conditions. Therefore, warm and cold reset scenarios should be explicitly distinguished by software.
Cold Reset: All memory areas are reset (stack, heap, global variables), Hardware peripherals are restarted, System starts with "factory default" configuration.
Warm Reset: RAM content (e.g., counters, mode state) is preserved, EEPROM/Flash is not reinitialized, User processes continue without interruption. The reset cause should be read from the hardware "reset cause register" (e.g., RCC_CSR, MCUSR).
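A hedged sketch of classifying the reset cause from a register snapshot; the bit positions below are invented for illustration and must be taken from the actual MCU reference manual (the RCC_CSR and MCUSR bit layouts differ per part):

```c
#include <stdint.h>

/* Hypothetical bit layout of a reset-cause register snapshot. */
#define RST_POWER_ON (1u << 0)
#define RST_WATCHDOG (1u << 1)
#define RST_SOFTWARE (1u << 2)

typedef enum { RESET_COLD, RESET_WARM } reset_kind_t;

/* A power-on reset implies RAM contents were lost, so the cold path
 * (full reinitialization) must run; watchdog and software resets keep
 * RAM, so a warm path can preserve counters and mode state. */
reset_kind_t classify_reset(uint32_t cause_bits)
{
    if (cause_bits & RST_POWER_ON) {
        return RESET_COLD;
    }
    return RESET_WARM;
}
```

On real hardware the cause register should also be cleared after reading, since many parts accumulate cause flags across resets.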
This item meets ISO 26262-6 Clause 9 – Software Initialization and IEC 61508-3 Clause 7.4.4 requirements.
16. Are unused interrupt vectors safely redirected?
Unused interrupt vectors, if not handled correctly, can cause jumps to arbitrary addresses and undefined behavior. All empty interrupt vectors should be redirected to a single safe "trap" function. This function ensures the system resets in a controlled way in unexpected situations.
Addresses of empty vectors should be verified in ISR table (e.g., with linker map file check).
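A single trap handler for all unused vectors could be sketched like this; the reset hook is a placeholder so the path can be exercised off-target, whereas real firmware would call the platform reset directly (e.g., NVIC_SystemReset() on Cortex-M):

```c
#include <stdint.h>

/* On real hardware this would be the platform reset call; here it is a
 * function pointer so the trap path can be tested off-target. */
static void (*system_reset_hook)(void) = 0;

/* Counts unexpected interrupts; useful evidence for post-mortem logs. */
static volatile uint32_t trap_count = 0;

/* The single safe handler that every unused vector entry points to. */
void Default_Trap_Handler(void)
{
    ++trap_count;
    if (system_reset_hook) {
        system_reset_hook();   /* controlled reset on real hardware */
    }
}
```

Each unused slot in the vector table is then populated with Default_Trap_Handler, which the linker map file check can confirm.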
This item is compatible with ISO/IEC/IEEE 12207 – Software Lifecycle Processes (Maintenance and Fault Handling) standard.
17. Are unused ROM areas filled with "trap" or reset commands?
To prevent the system from executing random instructions when the program branches to an invalid address, empty ROM areas should be filled with NOP (No Operation) instructions or a jump to the reset vector (0xFFFF / JMP 0x0000). This ensures the system fails safely after memory corruption or a pointer overflow.
Can be defined with "fill" directives in linker script (.ld file): FILL 0xFFFF. These areas should be verified with firmware security scan (binwalk, hexcmp).
This item is compatible with software integrity level (SIL 3–4) security criteria in DO-178C – Software Considerations in Airborne Systems standard.
18. Are non-volatile (persistent) memory corruptions checked?
Write operations in EEPROM or Flash memory can be interrupted halfway, especially during power loss. This situation can cause corrupt configuration data or system startup errors. CRC16 or CRC32 verification code should be used for each storage area. "Shadow copy" or "double-buffer" method should be applied during write operation.
Verify readback should be performed when Flash write operation completes. If write is detected during power loss (e.g., brown-out detection), system should start in safe mode at boot. Write counters should be kept for EEPROM life limit (e.g., 100,000 cycle).
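As one possible realization of the CRC check described above (CRC-16/CCITT-FALSE is shown; any well-tested polynomial works, and a hardware CRC unit may replace the software loop), validating a stored block might look like:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* CRC-16/CCITT-FALSE: polynomial 0x1021, initial value 0xFFFF. */
uint16_t crc16_ccitt(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0xFFFFu;
    for (size_t i = 0; i < len; ++i) {
        crc ^= (uint16_t)buf[i] << 8;
        for (int b = 0; b < 8; ++b) {
            crc = (crc & 0x8000u) ? (uint16_t)((crc << 1) ^ 0x1021u)
                                  : (uint16_t)(crc << 1);
        }
    }
    return crc;
}

/* Illustrative storage block layout: payload plus its stored CRC. */
typedef struct {
    uint8_t  payload[16];
    uint16_t crc;
} nvm_block_t;

bool nvm_block_valid(const nvm_block_t *blk)
{
    return crc16_ccitt(blk->payload, sizeof blk->payload) == blk->crc;
}
```

With a shadow copy, the new block is written and CRC-verified in the alternate slot before the active-slot marker is switched, so an interrupted write never destroys the last good copy.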
This item is compatible with IEC 60730-1 Annex H.11 – EEPROM Data Integrity and ISO 26262 Part 6 requirements.
19. Is there a protection mechanism for "program gone wild" situations?
Unexpected branches, infinite loops, or pointer errors in software can lead system into unpredictable behaviors. Therefore, every system should have a "self-recovery" mechanism defined.
- Watchdog timer: Provides automatic reset in system lockups.
- Stack overflow guard: Stack limit check should be performed (e.g., stack canary, MPU).
- Memory Protection Unit (MPU): Access boundaries for code and data areas should be defined.
- Exception handler: Should create error log in undefined situations (e.g., HardFault_Handler()).
- Recovery logic: System should enter "safe state" mode after reset.
This item is directly compatible with IEC 61508-3 Clause 7.4.7 – Defensive Programming and AUTOSAR Safety Platform principles.
Communication and Real-Time Behavior
20. Are there timeout controls in communication protocols?
In communication protocols such as UART, I²C, SPI, CAN, Ethernet, or BLE, the system must not wait indefinitely when no response is received. Timeout control eliminates the risk of infinite loops or CPU lockup.
Each communication call (read(), write(), transfer()) should be limited with a maximum wait time (timeout). Timeout duration should be determined according to communication speed and environmental conditions: UART (100–500 ms), I²C (10–100 ms), SPI (1–10 ms), Ethernet (1–3 s).
System should enter "retry" or "fail-safe" mode in timeout situation. In real-time systems, wait time should be implemented with osDelay() or EventFlag mechanisms compatible with RTOS task structures.
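The retry-then-fail-safe behavior can be sketched as below; the transfer function type is a stand-in for a real driver call (such as a HAL receive routine that enforces its own timeout), and the retry count is illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

#define COMM_MAX_RETRIES 3u

/* Transport function type: returns true on success within its own
 * timeout. A real implementation would wrap the driver call. */
typedef bool (*transfer_fn)(void);

typedef enum { COMM_OK, COMM_FAILED } comm_status_t;

comm_status_t transfer_with_retry(transfer_fn fn)
{
    for (uint32_t attempt = 0; attempt < COMM_MAX_RETRIES; ++attempt) {
        if (fn()) {
            return COMM_OK;
        }
        /* an optional back-off delay would go here */
    }
    return COMM_FAILED;   /* caller escalates to fail-safe mode */
}
```

Returning a status instead of looping forever keeps the fail-safe decision at the application layer, where mode changes and logging belong.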
This item meets IEC 61508-2 Clause 7.4.9 – Communication Error Detection requirement.
21. Are all communication errors logged or error handled?
Errors in the communication layer are not just transient events; they usually indicate hardware noise, misconfiguration, or EMI-related systematic problems. Therefore, each error type should be logged and system behavior should be adapted when necessary.
Error types to be detected: CRC / Checksum error, Framing / Parity error (UART), ACK/NACK (I²C), Collision (Ethernet, CAN), Timeout / Buffer overflow. Errors should be logged to a log area in ring buffer structure or flash-based error log. Each error should be stored with a numerical code (ERR_COMM_TIMEOUT = 0x02) and timestamp. System should enter warning or safe mode when error count exceeds threshold value.
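A minimal ring-structured error log with numeric code and timestamp, as a sketch (the field sizes, depth, and the firmware-version field are illustrative):

```c
#include <stdint.h>

#define ERR_LOG_DEPTH    16u
#define ERR_COMM_TIMEOUT 0x02u   /* example code from the error table */

typedef struct {
    uint32_t timestamp;   /* e.g. seconds since boot or RTC epoch */
    uint16_t code;        /* numeric error code */
    uint16_t fw_version;  /* firmware version, aids field diagnosis */
} err_entry_t;

static err_entry_t err_log[ERR_LOG_DEPTH];
static uint32_t    err_head = 0;   /* total pushes; oldest overwritten */

void err_log_push(uint32_t timestamp, uint16_t code, uint16_t fw_version)
{
    err_entry_t *e = &err_log[err_head % ERR_LOG_DEPTH];
    e->timestamp  = timestamp;
    e->code       = code;
    e->fw_version = fw_version;
    ++err_head;
}
```

The running err_head total also doubles as the error counter that is compared against the warning/safe-mode threshold.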
This item is compatible with ISO 26262-6 Clause 9.4 – Fault Tolerance and Monitoring standard.
22. Is CPU utilization measured?
High CPU utilization can delay real-time tasks or push the system into unpredictable behavior. Therefore, CPU load should be measured regularly and kept below limit values. Target range: average CPU load below 70%, maximum below 80%.
Measurement methods: "Idle Task Hook" function in RTOS-based systems, Hardware counters (SysTick, DWT_CYCCNT), Profiling tools (FreeRTOS Tracealyzer, SEGGER SystemView, STM32CubeMonitor). Load analysis results should be added to "CPU Profiling Report" document before production. If high CPU utilization is detected, task priorities or time slices should be rearranged.
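The idle-hook method reduces to simple arithmetic: first calibrate the idle-loop count over one period with no application load, then derive load from the count seen in each subsequent period. A sketch (function and parameter names are illustrative):

```c
#include <stdint.h>

/* CPU load in percent from idle-loop counts.
 * calib_count: idle count measured over one period with no other work.
 * idle_count:  idle count from the most recent period. */
uint32_t cpu_load_percent(uint32_t idle_count, uint32_t calib_count)
{
    if (calib_count == 0u || idle_count >= calib_count) {
        return 0u;   /* fully idle, or not yet calibrated */
    }
    /* 64-bit intermediate avoids overflow for large tick counts. */
    return 100u - (uint32_t)(((uint64_t)idle_count * 100u) / calib_count);
}
```

In a FreeRTOS system, the idle count would be incremented from the idle task hook and sampled plus cleared by a periodic timer.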
This item supports ISO/IEC 25010 – Performance Efficiency criterion.
23. Is interrupt response time measured?
Interrupt latency in real-time systems can cause critical events to be missed. Therefore, ISR latency should be measured periodically and compared with system requirements. ISR latency = Time between interrupt occurrence and ISR start.
Measurement methods: "Trigger → ISR entry pin" delay using logic analyzer or oscilloscope, "Trace hook" or DWT_CYCCNT measurement in RTOS systems. Acceptable range: High-speed control systems (shorter than 10 µs), General embedded applications (shorter than 100 µs). ISR latency increase can be caused by excessive CPU consumption by high-priority tasks.
This item supports DO-178C Table A-7 and ISO 26262 Part 6 Clause 9.4.3 requirements.
24. Is ISR execution time measured?
Interrupt service routines (ISRs) that run longer than necessary block other tasks in the system. This can cause "priority inversion" or "missed event" errors. Only the shortest possible operation should be performed inside the ISR; detailed processing should be deferred to the main loop as a "deferred task".
ISR duration should be measured according to processor clock cycle (e.g., DWT_CYCCNT or toggled GPIO). Acceptable ISR duration: should not exceed 5% of total system period. If ISR duration is exceeded, function should be divided into two stages: ISR (fast): sets event flag, Handler (slow): processes data. ISR profile should be documented under "Timing Verification Report".
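The two-stage split can be sketched as follows; the ISR only captures data and sets a flag, and the handler does the real work in main-loop or task context (names and the single-byte payload are illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

static volatile bool    rx_event = false;  /* set by ISR, cleared by task */
static volatile uint8_t rx_byte  = 0;
static uint32_t         processed = 0;

/* Fast stage: runs in interrupt context, only captures and flags. */
void uart_rx_isr(uint8_t byte_from_hw)
{
    rx_byte  = byte_from_hw;
    rx_event = true;
}

/* Slow stage: runs in the main loop or a task, does the real work. */
void uart_rx_handler(void)
{
    if (rx_event) {
        rx_event = false;
        /* ... parse rx_byte, update protocol state, log, etc. ... */
        ++processed;
    }
}
```

With bursty traffic, the single flag and byte would be replaced by a ring buffer so events cannot be lost between handler runs.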
This item meets IEC 61508-3 Clause 7.4.11 – Execution Time Limitation requirement.
25. Is there a version number field in data structures for version detection?
Incompatibility with old data formats (legacy format) can occur during software updates. Therefore, each data structure should contain a version number (struct version field).
Example structure:
typedef struct {
    uint8_t  version;
    uint16_t crc;
    float    calib_value;
} system_config_t;
When version changes, software should automatically convert (migration) or reset. This prevents system crash in EEPROM/Flash content changes. Version control should be added to logs and configuration reports.
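A migration sketch under assumed v1/v2 layouts (the added field and its default value are invented for illustration; the only structural assumption carried over from the text is that the version byte always comes first):

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define CONFIG_VERSION_CURRENT 2u

/* Layout of the v1 record still present in some units' storage. */
typedef struct { uint8_t version; float calib_value; } config_v1_t;

/* Current layout; the new field gets a documented default on migration. */
typedef struct {
    uint8_t  version;
    float    calib_value;
    uint16_t sample_rate_hz;   /* hypothetical field added in v2 */
} config_v2_t;

bool config_migrate(const void *raw, config_v2_t *out)
{
    uint8_t version = *(const uint8_t *)raw;  /* version is always first */
    if (version == 1u) {
        const config_v1_t *old = (const config_v1_t *)raw;
        out->version        = CONFIG_VERSION_CURRENT;
        out->calib_value    = old->calib_value;
        out->sample_rate_hz = 100u;           /* documented default */
        return true;
    }
    if (version == CONFIG_VERSION_CURRENT) {
        memcpy(out, raw, sizeof *out);
        return true;
    }
    return false;  /* unknown version: caller falls back to defaults */
}
```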
This item is directly compatible with "Data Compatibility & Upgrade Integrity" requirement in IEC 62304 – Medical Device Software Lifecycle standard.
Memory Management and Hardware Compatibility
26. Are FIFO, buffer, stack, and heap usage limits monitored?
Memory overflows are among the most common and hardest-to-detect errors in embedded software. They cause random system resets, data corruption, or unpredictable behavior. Size limits (BUFFER_SIZE) should be defined as compile-time constants for FIFO and buffer structures.
Stack monitoring: Separate stack area should be determined for each task in RTOS-based systems and stack guard should be enabled, System should be redirected to watchdog reset or fault handler in stack overflow situation.
Heap monitoring: Dynamic memory (malloc, free) usage should be kept to minimum, Memory allocation failures should be checked (if (ptr == NULL)), appropriate error message should be given, Heap fragmentation should be profiled regularly.
Stack/Heap limits should be monitored with post-compilation analysis tool: ARM Keil uVision Map Report, GCC map file, FreeRTOS Tracealyzer.
This item is compatible with ISO 26262-6 Clause 7.4.11 and CERT-C MEM35-C: Allocate and Free Memory Safely principles.
27. Are "volatile" and "const" definitions used correctly in critical functions?
Compiler optimizations can produce unexpected results when variables are declared incorrectly. Especially for hardware access and critical shared data, correct use of volatile and const is mandatory for code integrity. volatile should be used for variables modified by an ISR (interrupt service routine) or mapped to hardware registers (example: volatile uint8_t uart_rx_flag;). This prevents the compiler from optimizing away accesses to the variable (e.g., caching it in a register). const should be used for tables, lookup data, or configuration constants that remain fixed in ROM; it provides memory protection and reduces RAM consumption. Incorrect use, in particular a flag shared with an ISR that is not declared volatile, can cause synchronization errors between the ISR and the main loop.
This item is directly related to MISRA-C 2012 Rules 8.7 and 8.13.
28. Is "odd address" usage checked in 16/32-bit microcontrollers?
Microcontrollers can generate a bus fault or hard fault when a memory access does not comply with alignment rules. This is critically important especially for stack pointer and struct alignment. Data should be 2-byte aligned on 16-bit systems and 4-byte aligned on 32-bit systems. __attribute__((aligned(4))) or #pragma pack(push, 4) can be used to control alignment in C structures.
Stack pointer alignment should be checked at start: assert(((uint32_t)__get_MSP() % 8) == 0); Unaligned accesses can lead to data corruption especially in DMA or peripheral accesses. Alignment should be verified with "map" or "elf dump" analysis after compilation.
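The effect of packing on member offsets can be seen with two variants of the same struct; this is a sketch, and the exact padding is compiler-dependent, though the offsets noted in the comments are typical for 32/64-bit toolchains:

```c
#include <stddef.h>
#include <stdint.h>

/* Natural alignment: the compiler inserts 3 padding bytes after flag
 * so value lands on a 4-byte boundary (bus-friendly access). */
typedef struct {
    uint8_t  flag;     /* offset 0 */
    uint32_t value;    /* offset 4 on typical compilers */
} natural_t;

#pragma pack(push, 1)
/* Packed: saves 3 bytes but places value at offset 1, which forces
 * unaligned accesses and can fault on some cores or break DMA. */
typedef struct {
    uint8_t  flag;     /* offset 0 */
    uint32_t value;    /* offset 1: unaligned */
} packed_t;
#pragma pack(pop)
```

offsetof() checks like these can be turned into compile-time static assertions so a layout change in a shared header fails the build instead of faulting in the field.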
This item is compatible with ARM Architecture Procedure Call Standard (AAPCS) and ISO/IEC 9899:2011 (C11) §6.2.8 Alignment rules.
29. Are RAM and ROM usage reports generated?
How system memory resources are used should be reviewed after each compilation. This analysis enables early detection of risks such as code growth, stack overflow, or RAM overflow. .map, .elf, or memory summary files generated after compilation should be regularly examined.
Metrics to track: .text (code section) size → Flash usage, .data + .bss (RAM sections) → Runtime memory, Stack and heap total → Total RAM usage.
Target values: RAM usage less than 80% of total memory, Flash usage less than 90% of total space. These reports should be automatically archived in CI/CD system and associated with version.
This item is compatible with ISO/IEC 25010 – Resource Utilization and MISRA C:2012 Rule 8.10 principles.
Performance and Testability
30. Is LINT or similar static analysis tool used in software?
Static analysis tools detect errors before compilation stage, both shortening development time and increasing reliability. Especially in embedded systems, in addition to compiler warnings, using tools like LINT, cppcheck, Splint is recommended.
Typical detected errors: Memory leak, access violation, unused variables, possible overflows. Analysis should be run automatically in each compilation and results should be included in CI/CD pipeline. Report format (e.g., XML, HTML) should be archived and associated with software version. Target: Zero Critical Warnings Policy — 100% resolution of critical warnings.
This item meets MISRA-C:2012 Rule Compliance and ISO 26262-6 Clause 11.4.8 – Static Verification requirements.
31. Do functional tests cover all branches?
Functional tests guarantee that software works correctly not only in "correct state" but in all possible flows. In this scope, tests should achieve branch coverage target.
Statement Coverage: ≥ 95%, Branch Coverage: ≥ 90%, MC/DC Coverage (decision combination): ≥ 80% in safety-critical systems. Test scenarios should also include error cases (error paths).
Test tools: Ceedling, Unity, GoogleTest, gcov/lcov, VectorCAST. Test coverage report ("Coverage Report") should be generated for each software version.
This item is compatible with DO-178C Table A-7 and ISO/IEC/IEEE 29119-4 – Test Techniques standard.
32. Are performance measurements (loop time, interrupt latency, CPU load) documented?
Performance measurements are proof that system operates within real-time constraints. These measurements enable early detection of excessive resource usage or bottleneck formation.
Basic metrics to measure: Loop time (main loop period e.g., 10 ms ±2 ms), Interrupt latency (interrupt response e.g., shorter than 10 µs), CPU load (average 60%, maximum 80% limit). Measurement tools: DWT_CYCCNT, FreeRTOS trace hooks, SystemView, logic analyzer. Reports should be stored periodically in "Performance Summary" document.
This item is compatible with ISO/IEC 25010 – Performance Efficiency and AUTOSAR Timing Analysis principles.
33. Are warm-cold start, interrupt, and fault simulation tests performed?
Testing software under different startup conditions (cold/warm reset), interrupt traffic, or by injecting faults is the basis of resilience evaluation.
Test scenarios: Cold start (memory reset startup), Warm start (continue with preserved RAM), Fault injection (sensor, communication, or flash write error simulation), Interrupt storm test (process continuity under high interrupt density). These tests should be executed in automatable manner, not manually. Results should be documented under "Reliability Test Record".
This item meets IEC 61508-3 Clause 7.4.7 – Fault Insertion Testing requirement.
34. Are OTA (Over-The-Air) update and rollback strategy defined?
Remote software update (OTA) is a critical capability for feature development and bug fixes throughout the product's life. However, a rollback mechanism is mandatory for a safe and robust OTA system.
Update process should only work with signed firmware. A/B partition or dual-bank memory structure should be used: New software should not be activated without verification, System should be able to revert to old version in update error situation.
Power loss scenario during update should be tested. OTA logs (start, result, checksum, version) should be archived.
This item is compatible with IEC 62443 – Secure Software Update and ISO/SAE 21434 Cybersecurity for Vehicles requirements.
35. Is software configuration management (Git branching, tagging) policy documented?
Configuration management is necessary to track which version of developed software is compatible with which hardware and test set. Git, SVN, or Mercurial should be used as version control system.
Branching model: main/master (production code), develop (integration), feature/, hotfix/ (temporary branches). Each version should be tagged (e.g., v1.0.3). Commit messages should be written in standard format: [MODULE] Fixed SPI timeout handling – Issue #123. Policy document should be stored as "Software Configuration Management Plan (SCMP)".
This item is compatible with ISO/IEC/IEEE 12207 Clause 6.2 – Configuration Control standard.
36. Is secure boot and signature verification mechanism available?
Secure Boot guarantees that device only runs authorized software. This is the basic protection layer against malicious code loading. Firmware should be digitally signed with RSA/ECDSA algorithm.
Bootloader should verify this signature at every boot. "Chain of Trust" structure: Boot ROM → Bootloader → Application. Private key under manufacturer control, public key stored in firmware. System should enter safe mode if signature verification fails.
This item is compatible with IEC 62443-4-2 and NIST SP 800-193 – Platform Firmware Resiliency standards.
37. Is data privacy (PII masking) and log filtering structure implemented?
Logs containing personal data or device credentials should go through appropriate masking or anonymization process.
Example masking: Device ID: ****1289, User: u***@mail.com. Log levels (INFO, WARN, ERROR, DEBUG) should be filterable. Data access in the log system should be limited with role-based access control (RBAC). Critical logs should be signed for integrity control.
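A minimal in-place masking helper, as a sketch; the keep-last-N policy shown here is illustrative, since real masking policies depend on the data class and the applicable regulation:

```c
#include <stddef.h>
#include <string.h>

/* Masks all but the last `keep` characters of a field in place,
 * e.g. "10293847" with keep = 4 becomes "****3847". */
void mask_field(char *s, size_t keep)
{
    size_t len = strlen(s);
    if (len <= keep) {
        return;   /* too short: masking would reveal the whole value */
    }
    memset(s, '*', len - keep);
}
```

Masking should happen before the record is written to the log, not at display time, so the raw value never reaches persistent storage.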
This item is compatible with GDPR / KVKK compliance and ISO 27001 – Information Security Management principles.
38. Is there timestamp and serial number association for error logs?
The most important parameter in root cause analysis of errors is when and in which product the event occurred. Each log line should contain the following information: Timestamp (in UTC format), Device serial number, Software version, Error code and brief description.
Example: [2025-11-12 14:32:01Z] SN:10293 V1.04 ERR:0x12 – I2C Timeout. Log records should be stored in CSV or binary format, circular structure should be used in systems with over 1000 records.
This item meets ISO/IEC 27035-1 – Incident Management criteria.
39. Is cybersecurity vulnerability scan (CVE, static analysis) periodic?
Known vulnerabilities (CVE – Common Vulnerabilities and Exposures) in libraries, RTOS components, or open-source code need to be monitored. Scanning tools: NVD, Snyk, OWASP Dependency Check, GitHub Advisory Feed. Vulnerability reports should be reviewed in each major version. Emergency patch plan should be activated when critical vulnerability (CVSS ≥ 7.0) is detected. This process should be integrated into CI/CD pipeline.
This item is compatible with ISO/SAE 21434 – Cybersecurity Management System standard.
40. Is SBOM (Software Bill of Materials) created and stored?
SBOM is the list of all dependencies (library, driver, middleware) that make up software content. It is becoming mandatory in supply chain security and maintenance processes.
SBOM should be kept in JSON or SPDX format. Following information should be available for each item: Component name and version, License type (MIT, BSD, GPL, etc.), Source repository (URL), Security status (CVE information). SBOM file should be archived with version, changes should be updated through PCN/ECO process.
This item is compatible with NTIA SBOM Framework and ISO/IEC 5230:2020 – OpenChain requirements.
Note: This checklist is prepared for use in professional embedded software development processes. Each project may contain its own specific requirements; you can expand or customize this list according to your needs.