Cybersecurity & Privacy Checklist
This checklist is prepared to ensure cybersecurity, data privacy, and integrity protection in hardware and embedded systems. The goal is to ensure that the device is resilient to malicious access, unauthorized modification, and data breaches throughout the entire process from production to the end user.
Security Strategy and Policies
1. Are security objectives defined for the product (Security Objectives)?
The protection goals of the product or system (the CIA triad: Confidentiality, Integrity, Availability) should be clearly defined. For each objective:
- Protection level (High/Medium/Low)
- Asset type (data, credentials, communication channel, hardware)
- Protection method (encryption, access control, verification)
- Responsible party should be determined
These objectives should be included in the Security Requirements Specification (SRS) document and updated alongside design changes.
Defining objectives meets ISO 27001 Clause 6 and IEC 62443-4-1 requirements.
2. Is a threat model (Threat Model) and risk analysis (Risk Assessment) created?
All potential attack vectors targeting the product should be identified and a threat model created. This model should cover the system's physical, network, software, and user layers.
Example analysis tools and approaches:
- STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege)
- DREAD risk scoring
- Attack Tree or Data Flow Diagram (DFD) based analysis
Probability, impact, and preventive controls should be defined for each threat.
The risk assessment should be documented according to ISO 27005 and NIST SP 800-30 methodologies.
3. Is a security officer or "Product Security Owner" assigned for the product?
A person responsible for security should be assigned in each product group, defined as the Product Security Owner (PSO) or Security Champion.
Responsibilities:
- Tracking security requirements across versions
- Executing incident response process
- Approving security test plans
- Managing vulnerabilities reported in customer feedback
The PSO role should remain active throughout the product lifecycle and be involved in design changes, firmware updates, and PCN processes.
This practice supports the "security governance" structure defined in the IEC 62443-4-1 and ISO 27034 standards.
4. Are security requirements included in design cycle (Secure-by-Design)?
Security should be an integral part of the design, not a feature added later. Therefore, the development process should follow Secure-by-Design principles:
- Considering threat model in requirement definition phase
- Applying security principles at source code level
- Using secure boot, physical access protection, encryption hardware (TPM, HSM, PUF) in hardware design
- Making code reviews and security tests mandatory
This approach is a direct requirement of the IEC 62443-4-1 Secure Development Lifecycle (SDL) model.
5. Is a plan prepared for security tests (Security Test Plan)?
A written Security Test Plan should be prepared before the product is released to market. The plan should include the following components:
- Test scope (physical, software, network, cloud infrastructure)
- Test types (penetration test, fuzzing, code review, configuration validation)
- Test frequency and responsibilities
- Tools to be used (e.g., Metasploit, OWASP ZAP, Nmap, Binwalk)
Test results should be stored in a Security Validation Report, and the product should not go into production until critical vulnerabilities are resolved.
This plan should comply with ISO 27034-1 and NIST SP 800-115 (Technical Guide to Information Security Testing and Assessment).
Authentication and Access Controls
6. Is access control defined on all interfaces (UART, JTAG, USB, WiFi, BLE, Web API)?
An authorization policy should be created for every interface on the product.
- Development interfaces (UART, JTAG, SWD) should be open only on engineering samples; should be disabled or require password / key-based access on production units
- Wireless interfaces (Wi-Fi, BLE) access should be protected with strong authentication (WPA3, BLE LE Secure Connections)
- An API key or OAuth 2.0 token should be mandatory for Web APIs and cloud services
The effectiveness of these controls should be verified in the end-of-production test and recorded in an "Interface Security Matrix" document.
7. Are there default passwords or usernames?
The product should never ship with a default username or password.
- Password setting requirement should be imposed on user at initial setup
- A modern password hashing function such as bcrypt or Argon2 should be used; a plain fast hash (e.g., SHA-256 alone) is the bare minimum and not recommended by itself
- Passwords should never be stored in plaintext; a unique random salt should be applied per password
- A password policy (min. 8 characters, mixed character classes, periodic change) should be enforced on devices with administrator (admin) access
This approach meets ETSI EN 303 645 Article 5.1 ("No universal default passwords") requirement.
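The salted-hash requirement above can be sketched with Python's standard library. PBKDF2-HMAC-SHA256 stands in here for bcrypt/Argon2, which need third-party packages; the function names and iteration count are illustrative choices, not part of the checklist:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Derive a salted password hash with PBKDF2-HMAC-SHA256."""
    salt = salt or os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    _, digest = hash_password(password, salt)
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong-password", salt, stored)
```

The salt is stored alongside the digest; only the iteration count and algorithm choice need to be tracked in the password record for future upgrades.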
8. Is the user authentication mechanism strong?
Multi-layer authentication should be applied for all user and system access.
- Minimum requirement: authentication protected with hash + salt + nonce
- Session ID protected with TLS 1.2+ on network-based systems
- Two-factor authentication (2FA) or hardware token (YubiKey, OTP) support on critical systems
- Password storage and management should be done in line with OWASP IoT Security Guidelines
Failed login attempts should be limited with a lock-out mechanism (e.g., 5 attempts → 10-minute lock).
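A minimal lock-out sketch, assuming the 5-attempt / 10-minute policy above; the `LoginThrottle` class and its method names are hypothetical, not a prescribed API:

```python
class LoginThrottle:
    """Tracks failed logins per user; 5 failures lock the account for 10 minutes."""

    MAX_ATTEMPTS = 5
    LOCK_SECONDS = 600  # 10-minute lock

    def __init__(self) -> None:
        self._failures = {}  # username -> (failure_count, lock_expiry_time)

    def is_locked(self, user: str, now: float) -> bool:
        count, expiry = self._failures.get(user, (0, 0.0))
        return count >= self.MAX_ATTEMPTS and now < expiry

    def record_failure(self, user: str, now: float) -> None:
        count, _ = self._failures.get(user, (0, 0.0))
        count += 1
        expiry = now + self.LOCK_SECONDS if count >= self.MAX_ATTEMPTS else 0.0
        self._failures[user] = (count, expiry)

    def record_success(self, user: str) -> None:
        self._failures.pop(user, None)  # reset the counter on successful login
```

A production implementation would also persist the counters so a reboot does not clear an active lock.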
9. Are access permissions (authorization) defined role-based (RBAC)?
Access rights in the system should be organized according to the Role-Based Access Control (RBAC) principle.
Example roles: Admin, Service Technician, User, Guest. For each role:
- Allowed operations
- Command or API access limits
- Configuration change permissions should be clearly defined
Permission management should be handled through a central directory (e.g., LDAP, OAuth claims) or a local access table. The RBAC structure should be preserved across firmware updates, and privilege escalation should be verified with tests.
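The role table above can be sketched as a deny-by-default permission map; the role and operation names are illustrative only:

```python
# Each role maps to the set of operations it may perform (names are examples)
ROLE_PERMISSIONS = {
    "admin":   {"read_config", "write_config", "update_firmware", "view_logs"},
    "service": {"read_config", "update_firmware", "view_logs"},
    "user":    {"read_config"},
    "guest":   set(),
}

def is_allowed(role: str, operation: str) -> bool:
    """Deny by default: unknown roles or operations are rejected."""
    return operation in ROLE_PERMISSIONS.get(role, set())
```

The key design choice is the empty-set fallback: a typo in a role name fails closed rather than open.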
10. Is secure session management (session timeout, token refresh) implemented?
Session security is a basic requirement in applications and web interfaces.
- Sessions should be terminated automatically after a period of inactivity (idle timeout of 15 minutes or shorter)
- Access tokens should be refreshed at regular intervals (refresh token flow)
- Tokens and cookies should be transmitted only over HTTPS, with the Secure and HttpOnly flags set
- Multiple-session detection or a concurrent-login restriction should be implemented
- Session IDs should be random and have sufficient entropy (UUID v4 or a 256-bit random value)
These items comply with the OWASP Session Management Cheat Sheet.
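Generating session IDs with sufficient entropy, per the last bullet, is straightforward with Python's CSPRNG; this is a sketch, and `new_session_id` is an illustrative name:

```python
import secrets
import uuid

def new_session_id() -> str:
    """256 bits from the OS CSPRNG, URL-safe base64 encoded (43 chars)."""
    return secrets.token_urlsafe(32)

# UUID v4 carries 122 random bits; acceptable, but the 256-bit token is stronger
alt_session_id = str(uuid.uuid4())
```

Never derive session IDs from timestamps, counters, or `random.random()`; only a CSPRNG source satisfies the entropy requirement.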
11. Are physical access points (debug port, SD card, service connector) sealed or access restricted?
Physical ports and maintenance connections on device should be protected against unauthorized access.
- Debug ports (UART, JTAG, SWD) should be disabled or sealed with epoxy / metal cover in production versions
- Service connectors should be accessible only with special adapters used by authorized technicians
- Hardware write-protect locks and digital signature verification should be applied for SD card or removable media slots
- The device should lock itself or create an event log if the enclosure is physically opened (tamper detection)
These measures comply with IEC 62443-4-2 CR 3.5 and NIST SP 800-88 (Guidelines for Media Sanitization).
Secure Boot and Firmware Protection
12. Does bootloader perform signed firmware verification?
Firmware signature verification should be performed at the device's bootloader stage.
- Each firmware image should be digitally signed with RSA-2048 or ECDSA-P256 algorithm
- The bootloader should verify the firmware's authenticity using the manufacturer's public key
- If signature verification fails, the system should not start and should enter safe mode
This method effectively prevents unauthorized or malicious firmware installation (malware injection).
Standard reference: NIST SP 800-193 (Platform Firmware Resiliency), ETSI EN 303 645 Article 5.7.2.
13. Is firmware integrity verified with hash control (SHA-256 / HMAC)?
The firmware image's integrity should be verified with a hash algorithm during the boot and update phases.
- Minimum requirement: SHA-256, preferred method: HMAC-SHA256 (keyed verification)
- The bootloader should verify the hash value in addition to the firmware's signature
- The hash should be stored in the firmware metadata (manifest) and associated with the signature file
This control mitigates data corruption and manipulation risks.
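The keyed-verification step could look like the following sketch (function names are illustrative; a real bootloader would implement this in C against its crypto hardware, but the logic is the same):

```python
import hashlib
import hmac

def compute_tag(image: bytes, key: bytes) -> bytes:
    """Keyed integrity tag (HMAC-SHA256) over the full firmware image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_firmware(image: bytes, key: bytes, expected_tag: bytes) -> bool:
    # compare_digest avoids leaking the mismatch position via timing
    return hmac.compare_digest(compute_tag(image, key), expected_tag)
```

Unlike a bare SHA-256, the HMAC cannot be recomputed by an attacker who modifies the image, because it requires the verification key.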
14. Is secure boot chain (Chain of Trust) completed?
The entire boot process should proceed through a secure Chain of Trust:
- Boot ROM: embedded in hardware and immutable; verifies the bootloader signature
- Bootloader: verifies the firmware signature
- Application: started only from a verified image
The chain is based on the principle that each stage cryptographically verifies the next one before handing over control. If the hardware supports it, a Root of Trust (RoT) or TPM/PUF-based key storage should be used. The system should not be allowed to operate unless the chain completes.
This implementation directly meets IEC 62443-4-2 CR 3.1 – Secure Boot Integrity Verification article.
15. Are firmware updates (OTA / local) done through encrypted channel?
All firmware updates should be delivered over secure, encrypted communication channels:
- TLS 1.2 or TLS 1.3 is mandatory for OTA (Over-the-Air) updates
- For local updates (USB, SD card), the file should be encrypted with AES-256 in CBC or, preferably, GCM mode
- Updates should be verified with signed manifest file
- Unauthorized update attempts should be logged and blocked
This approach complies with secure update requirements defined in NIST SP 800-147B and ETSI EN 303 645 Section 5.5 standards.
16. Is rollback mechanism available for updates?
The device should be able to revert automatically to the previous stable version after a failed or corrupted update.
Rollback mechanism:
- Should work through two firmware partitions (A/B partition) or dual image system
- If the new version cannot be verified, the system should automatically boot the "last known good" version
- The rollback operation should be logged and the user notified
This feature reduces the risk of bricking the device and maintains system continuity. Standard correspondence: PSA Certified Security Model – Firmware Update Resilience.
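The A/B slot-selection logic can be sketched as follows; the slot-metadata layout shown is an assumption for illustration, not a specific bootloader's format:

```python
def select_boot_slot(slots: dict) -> str:
    """Boot the highest-version slot whose image verified; else fail to recovery."""
    candidates = [name for name, meta in slots.items() if meta["verified"]]
    if not candidates:
        raise RuntimeError("no bootable image - enter recovery mode")
    return max(candidates, key=lambda name: slots[name]["version"])

slots = {
    "A": {"version": 2, "verified": False},  # new image failed signature check
    "B": {"version": 1, "verified": True},   # last known good
}
```

With slot A's new image unverified, `select_boot_slot(slots)` falls back to slot B, which is exactly the "last known good" behavior required above.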
17. Are firmware version number and signature status stored in system logs?
The following information should be recorded in the log system after each firmware installation or update:
- Firmware version (v1.0.3 etc.)
- Signature verification result (PASS / FAIL)
- Update source (OTA / USB / local)
- Date–timestamp
- User or system identity
These logs should be kept only in read-only memory or secure log storage. Critical log records should be signed or protected with an HMAC.
This requirement complies with ISO 27001 A.12.4 (Logging and Monitoring) and ETSI EN 303 645 Section 5.8.
Data Protection and Privacy
18. Is data collected by device minimized (Data Minimization)?
The product should collect only functionally necessary data. The following analysis should be performed for each data type:
- Purpose: For which function is this data collected?
- Necessity: Can device work without this data?
- Risk: Will the user or system be affected if the data leaks?
Unnecessary data collection should be prohibited and verified with "Data Inventory" and "Privacy Impact Assessment (PIA)" documents.
This approach meets GDPR Article 5(1)(c) and KVKK Article 4(2)(ç) ("Data processing should be limited to purpose") requirement.
19. Are personal or sensitive data stored encrypted?
All personal, financial, location, or identification information should be stored encrypted on the device.
- Minimum: AES-128-GCM, preferably AES-256-GCM or ChaCha20-Poly1305
- Keys should be kept in hardware-based secure area (TPM, Secure Element, PUF)
- Separate file system (e.g., LUKS, eCryptFS, or AES-XTS disk partition) can be used for encrypted data areas
- Sensitive data (e.g., password, token, location) should never be stored in plaintext
This implementation complies with ISO 27001 A.10.1 (Cryptographic Controls) and IEC 62443-4-2 CR 3.4 standards.
20. Is data transmission done with secure protocols like TLS, DTLS or VPN?
All data communication should take place over a secure channel.
- TLS 1.2 / 1.3 or DTLS 1.3 should be used in IP-based communication
- IoT protocols like MQTT, CoAP should be protected with TLS or DTLS layer
- VPN, IPSec or WireGuard integration should be preferred for end-to-end connections
- Certificate verification should be active, with certificate pinning where appropriate; self-signed certificates should be prohibited in production systems
- Old and weak protocols and primitives (SSLv3, TLS 1.0, RC4, MD5) should be disabled without exception
This control directly meets ETSI EN 303 645 Article 5.5 ("Secure communications") requirement.
21. Do log files store personal data (PII) with masking?
Personal data in all system logs should be masked or anonymized before storage.
Example:
- IP addresses → 192.168.xxx.xxx
- Usernames → hashed_user_id
- Location data → rounded or regional-based storage
Logs should be kept in read-only area, signed (HMAC or ECDSA) and protected from unauthorized access.
This implementation complies with ISO 27018 (PII protection for cloud services) and GDPR Recital 26 (Anonymization) principles.
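The masking rules above can be sketched like this; the helper names and the 16-hex-digit truncation are illustrative choices:

```python
import hashlib

def mask_ip(ip: str) -> str:
    """Keep the first two octets, mask the host part: 192.168.1.7 -> 192.168.xxx.xxx"""
    parts = ip.split(".")
    return ".".join(parts[:2] + ["xxx", "xxx"])

def pseudonymize_user(username: str, salt: bytes) -> str:
    """Salted hash so the raw username never reaches the log file."""
    return hashlib.sha256(salt + username.encode()).hexdigest()[:16]
```

The salt must itself be kept out of the logs, otherwise the pseudonyms can be reversed for small username spaces by brute force.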
22. Is procedure defined for deletion or anonymization of user data?
User data should be permanently deletable from the device, the cloud, and databases.
Deletion process:
- User request (RTBF – Right to be Forgotten) received
- Data deleted or anonymized from manufacturer infrastructure, backups and device
- Deletion operation documented with audit trail
Anonymization should be done in such a way that the data can no longer be associated with a person (e.g., hash + salt + random ID). This step meets GDPR Article 17 (Right to Erasure) and KVKK Article 7 (Deletion, destruction, anonymization) requirements.
23. Is data retention period and deletion policy documented (Retention Policy)?
A retention period and deletion criteria should be defined for every data type.
Example:
- System logs: 90 days
- User activity records: 1 year
- Sensor measurement data: 3 years
- Personal data: deleted when user account closed
The policy should be captured in a document titled "Data Retention Policy" and flagged in ERP/PLM systems. Deletion operations should be proven with a verification log (audit evidence).
This requirement complies with ISO 27001 A.18.1.3 (Retention of Records) and GDPR Art.5(1)(e) standards.
24. If the product is evaluated under GDPR / KVKK scope, are data subject rights (RTBF, export) supported?
Data access, correction, deletion, and portability rights should be provided to the user:
- RTBF (Right to Be Forgotten): Permanent deletion of data
- Data Export: Export of user data in open format (JSON, CSV)
- Access Request: the user's right to query the data held about them
- Consent Management: tracking, withdrawal, or re-granting of user consent
These features should be accessible in the product's management interface or mobile application. The processing time and verification steps (e.g., identity verification) should be documented for each request.
This article directly complies with GDPR Chapter 3 (Art. 12–23) and KVKK Article 11 provisions.
Key Management and Encryption
25. Are cryptographic keys stored in secure hardware (TPM, HSM, Secure Element)?
All long-lived cryptographic keys (e.g., private key, session master key, firmware signing key) should be stored in hardware security modules:
- Chips like TPM (Trusted Platform Module) or Secure Element (ATECC608, STSAFE) should be used
- HSM (Hardware Security Module) should be mandatory for server-side key storage
- Keys should never be stored in plaintext in flash or EEPROM
- Access should only be provided through hardware-internal APIs (PKCS#11, TEE, PSA Crypto API)
This method meets NIST SP 800-57 Part 1 Section 5.6.2.2 and FIPS 140-3 Level 2+ criteria.
26. Is key generation, storage and update policy defined?
A written Key Management Policy should be prepared for all key types (master, session, update, device key).
Policy content:
- Key generation method (TRNG/DRBG based)
- Storage location (TPM, HSM, Secure Element, Encrypted File)
- Lifetime and renewal period
- Distribution and authorization rules
- Backup (key escrow) and recovery procedure
- Revocation and destruction (key revocation/destruction) process
Policy document should comply with ISO 27001 A.10.1.2 and NIST SP 800-57 Part 2.
27. Are key exchange algorithms (ECDH, RSA 2048+) configured with secure parameters?
Algorithms used during key exchange should meet current security standards:
- RSA 2048+, ECDH (P-256, Curve25519) or X25519 should be used
- Old algorithms (DH less than 2048 bit, RSA 1024, MD5, SHA-1) should be completely prohibited
- Forward secrecy should be provided by preferring ephemeral key exchange (ECDHE)
- TLS profiles should comply with RFC 8446 (TLS 1.3) or RFC 7919
Additionally, algorithm parameters and certificate signatures should be regularly audited for security.
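On the client side, a compliant TLS configuration can be sketched with Python's `ssl` module; `make_client_context` is an illustrative name, and the defaults shown already disable the legacy protocols the bullets above prohibit:

```python
import ssl

def make_client_context() -> ssl.SSLContext:
    """Client TLS context: certificate + hostname verification on, TLS >= 1.2."""
    ctx = ssl.create_default_context()  # CERT_REQUIRED, check_hostname=True
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # rejects SSLv3 / TLS 1.0 / 1.1
    return ctx
```

Modern OpenSSL builds negotiate ECDHE key exchange by default under this context, which provides the forward secrecy required above.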
28. Does random number generator (TRNG/DRBG) rely on reliable sources?
All cryptographic operations should use reliable randomness sources.
- Hardware-based TRNG (True Random Number Generator) should be used primarily; if not available, NIST-approved DRBG (Deterministic Random Bit Generator) algorithm should be implemented
- Randomness quality should be verified with FIPS 140-2 Annex C or AIS 31 tests
- Entropy sources (thermal noise, ring oscillator, ADC jitter) should be monitored and test results logged in production log
Weak RNG sources can lead to key prediction and cryptographic vulnerabilities.
29. If encryption libraries are open source, have versions passed security audit?
All cryptography libraries used in the product (e.g., mbedTLS, OpenSSL, wolfSSL, libsodium) should be subject to version-level security checks:
- Version control: latest LTS (Long Term Support) version should be used
- Security vulnerabilities (CVE) should be regularly monitored
- Weak algorithms and protocols should be disabled during compilation
- Libraries should be verified according to "reproducible build" principle
This control meets CWE-327 (Use of Broken Cryptographic Algorithm) and ISO 27002:2022 A.8.28 articles.
30. Does system support automatic revocation (key revocation) in case of key leak?
In case of a key leak or unauthorized access, the system should have a mechanism to automatically revoke the affected keys.
- Revocation list (CRL) or OCSP should be supported
- Device should be able to receive new key/certificate during update or reconnection
- All sessions and signatures should be automatically revoked after leak detection
- Manufacturer infrastructure should have "Key Compromise Response Plan" document
This structure complies with ETSI EN 303 645 Article 5.7.3 and NIST SP 800-57 Part 3 requirements.
Network and Communication Security
31. Does device use secure protocol on network interfaces (Ethernet, Wi-Fi, BLE, 4G)?
Unencrypted or unauthenticated protocols should be prohibited on all network connections.
- Ethernet / IP based: only TLS 1.2 or TLS 1.3; HTTPS instead of HTTP, SFTP/FTPS instead of FTP should be used
- Wi-Fi: minimum WPA2-PSK (AES) or preferably WPA3-SAE; WEP and TKIP should be completely disabled
- BLE: only "LE Secure Connections" mode should be active, 6-digit PIN or OOB key should be used during pairing
- 4G/LTE: operator SIM authentication should be active, APN access should be limited
Device's network configuration should be verified with "Network Interface Security Checklist" document.
32. Are all open ports and services inventoried and unnecessary ones closed?
All services (TCP/UDP) running on the device should be inventoried, and only the mandatory ones left open.
- Port scanning (Nmap, Masscan) tests should be regularly conducted
- Weak services like Debug, Telnet, FTP, SNMP v1/2c should be completely removed
- Service listening addresses should be limited only to internal interfaces (localhost, management VLAN)
- "Network Exposure Report" should be prepared and updated in each version
This control directly meets OWASP IoT-1: Weak Network Services article.
33. Is encryption protocol (TLS) certificate chain and validity period monitored?
For TLS certificates between device and cloud:
- Validity period (expiry date) should be automatically monitored, renewal plan should be created
- Certificate chain should be complete (root → intermediate → server)
- RSA 2048+ or ECDSA P-256 key length should be used
- Self-signed certificates should only be accepted in development environments
OCSP or CRL check should be active in production systems.
This requirement complies with RFC 5280 and ETSI EN 319 411 certificate management standards.
34. Is certificate pinning implemented against DNS spoofing or MITM attacks?
The device should implement certificate pinning or public key pinning on the client side.
- Each connection should be verified only against a predefined CA or public key
- A certificate fingerprint (SHA-256) check guards against DNS spoofing, MITM, and proxy attacks
- When a certificate change is planned, the new fingerprint should be distributed in advance via OTA
This implementation complies with NIST SP 800-52 Rev.2 and OWASP Mobile Top 10 M3 (Insecure Communication) guidelines.
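The fingerprint check itself is a single hash comparison; a sketch follows, where in practice the DER bytes would come from `SSLSocket.getpeercert(binary_form=True)` before any application data flows:

```python
import hashlib

def cert_fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of the DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def is_pinned(der_cert: bytes, pinned_fingerprints: set) -> bool:
    """Accept the peer only if its fingerprint is in the pre-provisioned set."""
    return cert_fingerprint(der_cert) in pinned_fingerprints
```

Keeping a set of fingerprints (old plus new) is what makes the advance-distribution step in the last bullet work without a connectivity gap during certificate rotation.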
35. Is server API access authenticated (API Key / OAuth 2)?
Every API call between the device and cloud or external systems should be authenticated.
- API Key, JWT or OAuth 2.0 Bearer token should be used
- Token lifetime should be limited (e.g., 24 hours) with a refresh-token mechanism
- Following the "Least Privilege" principle, each token should only grant access to specific endpoints
- API logs should be signed together with identity, time and operation type
This step meets ISO 27034 (App Security Framework) requirement.
36. Are MQTT/CoAP protocols in cloud connections protected with TLS or PSK?
IoT communication protocols should use secure transport layer:
- MQTT: TLS 1.2+ or PSK-based session (MQTTS, port 8883)
- CoAP: only DTLS 1.3 or OSCORE (RFC 8613)
- Topics should not contain user identity or confidential data
- Broker identity should be verified with certificate, anonymous connect should be disabled
This control directly meets ETSI EN 303 645 Article 5.5 ("Secure communications") standard.
37. Is rate limiting and brute-force protection active against network attacks?
The device or server should have the following protections to mitigate brute-force and DoS attacks:
- Rate limit for login and API calls (e.g., 10 requests / minute)
- Failure threshold in login attempts (5 attempts → 10 minute lock)
- Firewall / IDS rule against SYN flood, UDP amplification and ping attacks
- Dynamic blocking system like fail2ban at network layer
- SIEM or log analysis for anomaly detection (e.g., ELK, Splunk)
This approach meets IEC 62443-4-2 CR 7.1 – Denial of Service Protection requirement.
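The request-rate limit in the first bullet is often implemented as a token bucket; this sketch assumes the 10 requests/minute figure above, and the class name is illustrative:

```python
class TokenBucket:
    """Allows bursts up to `capacity`, refilled continuously at `refill_per_sec`."""

    def __init__(self, capacity: float = 10.0, refill_per_sec: float = 10.0 / 60.0):
        self.capacity = capacity
        self.tokens = capacity
        self.refill = refill_per_sec
        self.last = 0.0  # timestamp of the previous call

    def allow(self, now: float) -> bool:
        # top up tokens for the elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Unlike a fixed 60-second window, the bucket smooths refill over time, so a client that drained its quota regains one request roughly every 6 seconds.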
Software Integrity and Vulnerability Management
38. Is SBOM (Software Bill of Materials) created for all third-party libraries?
All external components, open-source libraries, and dependencies used in the product software should be listed in an SBOM (Software Bill of Materials).
SBOM content:
- Library name, version and license
- Source repository (repository URL)
- Usage area in product version
- SHA-256 checksum and integrity verification
SBOM should be updated with each version and included in production files.
This requirement complies with NTIA SBOM Framework, ISO/IEC 5230 (OpenChain) and US Executive Order 14028 standards.
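One SBOM record with the fields listed above might be assembled like this; the dictionary layout follows the bullet list, not a specific SBOM format such as SPDX or CycloneDX:

```python
import hashlib

def sbom_entry(name: str, version: str, license_id: str, artifact: bytes) -> dict:
    """One SBOM record: identity, license, and an integrity checksum."""
    return {
        "name": name,
        "version": version,
        "license": license_id,
        "sha256": hashlib.sha256(artifact).hexdigest(),
    }
```

Recomputing the checksum of the shipped artifact and comparing it against the SBOM entry is what makes the integrity-verification bullet auditable at release time.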
39. Are library versions regularly scanned for vulnerabilities (CVE/NVD)?
All open source and third-party libraries should be regularly scanned against vulnerability databases (CVE/NVD, GitHub Security Advisories, OSS Index).
- Scanning tools: Trivy, Snyk, Anchore, Dependency-Check
- Automatic CI/CD pipeline integration should be done
- Automatic notification and build blocking should be applied when a critical vulnerability (CVSS 7.0 or higher) is detected
A "Vulnerability Scan Report" should be prepared before each release and pass quality approval.
This implementation meets OWASP SAMM (Governance) and NIST SP 800-218 (SSDF) requirements.
40. Is security advisory or automatic notification system available for CVE tracking?
Organization should use security advisory service or automatic CVE notification system.
- Daily or weekly CVE feeds (NVD JSON, GitHub Security Alerts) should be monitored
- CVE tracking officer should be assigned for critical components (e.g., OpenSSL, glibc, BusyBox, Linux kernel)
- When a new vulnerability is announced, an impact assessment and remediation plan should be completed within 72 hours
This approach complies with ISO 30111 (Vulnerability Handling Processes) and FIRST PSIRT Guidelines standards.
41. Are vulnerability patches (security patches) applied in planned updates?
Software updates should include not only functional changes but also security patches.
- A "Security Patch List" should be created for each new firmware version
- No version should be released in a planned update without including patches for critical CVEs
- A minimum security-update policy should be defined for legacy products (e.g., 5 years of support)
This article directly meets ETSI EN 303 645 Section 5.3 ("Keep software updated") requirement.
42. Is software signing process (code-signing pipeline) audited?
All software and firmware builds should pass through signing pipeline:
- The code signing process should run on an HSM or a "signing server" accessible only to authorized persons
- Signing keys (private keys) should never reside on developer machines
- Signed build outputs should be tested with "Signature Verification Script"
- Code signing process should be reviewed by internal audit at least once a year
This article complies with NIST SP 800-218 (SSDF) and SLSA Level 3+ requirements.
43. Is build environment (build system) secure and traceable?
Build infrastructure (CI/CD pipeline, build server, container, toolchain) should be protected against unauthorized access, malicious code insertion and version manipulation.
- Developer accesses should be limited with RBAC and MFA
- Build environment should only build from signed, verified source code
- Hash value (checksum) and signature verification should be done for each build output
- Build logs should be archived in immutable format
- Source code, binary and artifact consistency ("reproducible builds") should be tested in CI/CD environment
This measure complies with the supply chain security principles defined in the NIST Secure Software Development Framework (NIST SP 800-218, SSDF) and the Google SLSA Level 4 standard.
Incident Management and Traceability
44. Is log system active for security events?
All security events occurring on the device, the network, and in cloud services should be recorded.
The log system should monitor at least the following events:
- Failed login attempts
- Access denial or unauthorized API call
- Unauthorized firmware installation / configuration changes
- File integrity violation (hash mismatch)
- Abnormal network traffic or service loading
The logging level (info/warning/error/critical) should be set according to the product type, and timestamps should be verified with NTP or RTC synchronization.
This article complies with IEC 62443-4-2 CR 6.1 "Audit Log Capability" standard.
45. Are log files digitally signed or stored protected?
All log files should be protected against manipulation.
- Log records should have integrity verification with HMAC-SHA256 or ECDSA signature
- Log files should be written to read-only area (read-only partition / WORM storage)
- Each log block should be verifiable by containing timestamp, device identity and signature chain (hash chain)
- Exported logs should be transmitted encrypted (AES-GCM) and read only by authorized service
This approach meets NIST SP 800-92 (Guide to Computer Security Log Management) requirements.
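The hash-chain idea from the third bullet can be sketched as follows: each record's HMAC covers both the event and the previous record's MAC, so deleting or altering any entry breaks verification from that point on. Function names are illustrative:

```python
import hashlib
import hmac
import json

def append_log(chain: list, key: bytes, event: dict) -> None:
    """Append an event; its MAC binds it to the previous record."""
    prev = chain[-1]["mac"] if chain else "genesis"
    payload = json.dumps(event, sort_keys=True)
    mac = hmac.new(key, (prev + payload).encode(), hashlib.sha256).hexdigest()
    chain.append({"event": event, "mac": mac})

def verify_chain(chain: list, key: bytes) -> bool:
    """Recompute every MAC in order; any tampering invalidates the chain."""
    prev = "genesis"
    for record in chain:
        payload = json.dumps(record["event"], sort_keys=True)
        expected = hmac.new(key, (prev + payload).encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, record["mac"]):
            return False
        prev = record["mac"]
    return True
```

The MAC key must live outside the log storage (e.g., in the secure element), otherwise an attacker who can edit the logs can also re-sign them.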
46. Is incident notification and security breach (incident response) procedure written?
The organization should have an official Security Incident Response Plan (IRP).
IRP content:
- Incident classification (critical, high, medium, low)
- Notification chain and responsibilities (Product Security Owner, IT, management)
- Initial response steps (containment, eradication, recovery)
- Post-incident analysis and documentation (post-mortem)
- Improvement and preventive action plan
The plan should be tested with an exercise at least once a year, and a PSIRT (Product Security Incident Response Team) organization should be active.
Standard compliance: ISO/IEC 27035-1:2016 and FIRST PSIRT Framework.
47. Is notification mechanism to user or manufacturer defined for critical security errors?
When the system detects a critical vulnerability or security breach, it should be able to notify both the user and the manufacturer.
- User alerts: via screen, LED, mobile notification or email
- Manufacturer notifications: to PSIRT system via automatic log transfer, SNMP trap, MQTT event or REST API
- Incident class: firmware integrity error, failed update, repeated brute-force attack etc.
Notification system should meet ISO/IEC 30111 (Vulnerability Handling) and ETSI EN 303 645 Section 5.6 ("Report of security vulnerabilities") requirements.
48. Are security logs collected anonymized on central server (SIEM)?
Device logs should be forwarded to a central SIEM (Security Information & Event Management) infrastructure.
- Log transmission should be done via TLS 1.2+, Syslog-TLS (RFC 5425) or MQTT/HTTPS
- Personal data (PII) should be sent anonymized or masked
- SIEM correlation rules should automatically detect anomalies (e.g., 5 failed logins / 10 min)
- Solutions that can be used: ELK, Splunk, Graylog, Wazuh, Sentinel
This structure complies with ISO 27001 A.12.4 (Logging and Monitoring) and IEC 62443-4-1 SR 6.2 (Security Event Monitoring).
49. Are security updates clearly stated in version notes (Security Changelog)?
Security improvements made in each firmware or software version should be clearly documented:
- CVE numbers and fix descriptions should be specified in "Security Changelog" or "Release Notes"
- Risk level (CVSS score) and affected component list should be provided for critical level patches
- Users should transparently see which vulnerabilities are closed in new version
This implementation meets ISO 29147 (Vulnerability Disclosure) and ETSI EN 303 645 Section 5.3 ("Keep software updated") principles.
Security in Production and Service Phase
50. Is firmware loading channel secure during production (Secure Flashing)?
Loading firmware or configuration files on the production line should be protected against unauthorized access and manipulation.
- Firmware loading (flashing) operations should only be done via TLS/SSH or encrypted local connection (AES-256)
- Loading tool should perform signed firmware verification (signature verification)
- Firmware files should not be stored outside the production network; access should only be granted to authorized personnel with MFA (Multi-Factor Authentication)
- The verification hash (SHA-256) computed after the flash operation should be recorded automatically
This implementation meets IEC 62443-4-1 SR 5.4 "Secure Delivery & Integration" standard.
51. Are device serial number and security identity (UID) associated and recorded?
Each device should be identified with a unique serial number (SN) and a hardware security identity (UID / Device ID / PUF).
- These two identities should be matched and recorded in a database during production
- The UID should be used in the device's cryptographic authentication (e.g., mutual TLS, device attestation)
- The UID should never be displayed in plaintext or be exportable by the user
This association is mandatory for traceability and secure supply chain management.
Standard reference: ISO/IEC 20243 (Open Trusted Technology Provider Standard – OTTPS).
52. Do service and maintenance software only work on signed devices?
All service, configuration or test tools should only interact with signed and verified devices.
- Service software should not operate without checking device identity (UID or certificate)
- Device–service communication should be protected with mutual TLS or hardware signature verification
- Service software should also be signed (code-signing certificate) and run only on secure platforms
- Service operation should be rejected on devices with non-matching device identity or unauthorized devices
This article complies with IEC 62443-4-2 CR 3.3 "Integrity of Information" standard.
53. Are debug ports permanently disabled after production tests (Lock Bits)?
After post-production quality tests are completed, all debug interfaces should be permanently disabled.
- MCU security bits (Lock bits, Read-out Protection, Security Fuse) should be activated at end of production
- This operation should be done after firmware verification test; logged with timestamp in production record
- Ports like JTAG, SWD, UART should be inaccessible at software or hardware level after testing
- If service access required, special "signed unlock token" mechanism should be used
This implementation eliminates CWE-1191 (Improper Restriction of Debug Interface Access) risk.
54. Is a physical / authorized-access condition defined for transition to service mode?
Transition to service or special maintenance modes should only be possible with physical access and a verified identity.
- Service mode should be activated by opening device, using special connector or PIN combination
- Software-based service access (remote mode) should only be opened with signed command and temporary token
- Transition to and exit from service mode should be recorded in all log systems
- On critical products (e.g., medical, industrial control), automatic safety interlock should be done during service mode entry
This rule complies with ISO/SAE 21434 (Cybersecurity for Road Vehicles) and IEC 62368-1 (Service Access Control) principles.
Note: This checklist is prepared to ensure that cybersecurity and privacy requirements in hardware and embedded systems are met. Each item aims to apply security measures across all processes from design to production, referencing the relevant international standards. You can expand or customize this list according to your product's specific requirements.