ARINC 653p2
Description
This project adds ARINC 653 Part 2 support to Deos and the ARINC 653 Part 1/4 environment. As the various subsections of Part 2 are evaluated, this page should be updated with additional descriptions. For conformance with ARINC 653 Part 2, each subsection must be implemented in full, but each subsection is independent of the others. This page is therefore expected to grow as we evaluate different subsections of Part 2.
The following service categories are specified as of Supplement 2:
- File System - Supported in Deos. See ARINC_653p2_File_System_On_CFFS.
- #Sampling Port Data Structures - On Hold
- #Multiple Module Schedules - Required for DesertEagle D2 release
- #Logbook System - Required for DesertEagle D2 release
- Sampling Port Extensions - Supported in Deos via 653P1 runtime
- #Service Access Points - On Hold
- #Name Service - On Hold
- Memory Blocks - Supported in Deos via 653P1 runtime
- #Health Monitoring Extensions - On Hold
- #Queuing Port List Services - Required for DesertEagle D2 release
The following service categories were added in Supplement 3 (paired with P1 Supplement 4):
- #Multiple Processor Cores Extensions - On Hold
The following service categories were added in Supplement 5:
- #Interrupt Services - On Hold
Sampling Port Data Structures
This section is not fully understood yet. On one hand, this could be as simple as providing some data types; on the other, it could involve the ability to create configurations similar to chained items in IOI terms. We need to understand what this means before we can provide estimates and details on implementing it.
Estimate
TBD
Multiple Module Schedules
Multiple module schedules allow the window definitions in a major frame to be changed at runtime. The standard provides two use cases:
- Support different partition time windows during initialization to allow faster startup times, then switch to a different schedule during normal operation.
- Support different partition configurations, so that after a failure the system can be reconfigured with a different set of partitions and a different set of windows supporting those partitions.
The assumption is that I/O does not change when the WAT changes. The integrator should keep related partitions in related schedules. If I/O needs to be cleaned up, use a ScheduleChangeAction of WARM_START and then recreate the sampling/queuing ports so that ioi_open is called again.
Kernel Details
The kernel was designed with the idea that the Window Activation Table (WAT) would need to be dynamically changed at some point in the future. The user API for switching the WAT has not yet been designed, but is believed to be about one month of effort plus verification.
The idea is that all WAT threads are created at startup if autocreated; however, WAT threads that are not part of the current WAT would not receive any time. If the WAT is changed, new threads may start receiving time, others may no longer be scheduled, and others will continue to run. Notification of the scheduleChangeAction required by 653 would be a 653 responsibility. It is not anticipated that a new interrupt will be used for this notification.
- There would need to be access control, since not all WAT threads are allowed to change the WAT.
- For 653 purposes the schedule change occurs at the next major frame. Until then, both the current WAT and the pending WAT id are accessible to all partitions.
- WATs need to be named, so we can get a WAT handle given a name.
- The time of the current WAT's activation needs to be recorded and available.
The above notes are for WAT threads. Multicore now assigns threads to schedulers and schedulers to windows. A 653 partition is a min-rate slack-consuming thread and is the only thread in its scheduler. An integrator may want to exclude a set of RMA threads by not assigning an initial window to that scheduler; then, once the critical partitions are running, switch the schedule to a set of windows that allows additional 653 partitions to execute or assigns different amounts of time, potentially also allocating time to an RMA scheduler. We need to consider whether any RMA budget or rate changes are allowed, or possibly adopt a rule that a scheduler is either guaranteed sufficient budget for its threads, or has zero budget and is excluded from the current window activation table.
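The deferred-switch behavior described above can be illustrated with a small sketch. This is hypothetical code, not the Deos kernel design or API; all names (watId, watScheduleState, etc.) are invented for illustration.

```c
#include <stdint.h>

typedef uint32_t watId;

typedef struct {
    watId    currentWat;      /* WAT in effect this major frame */
    watId    pendingWat;      /* equals currentWat when no change is pending */
    uint64_t lastSwitchTime;  /* time the current WAT was activated */
} watScheduleState;

/* Requested by an authorized WAT thread; access control (not shown)
 * must reject threads that are not allowed to change the WAT. */
void requestWatSwitch(watScheduleState *s, watId newWat)
{
    s->pendingWat = newWat;   /* takes effect at the next major frame */
}

/* Run by the kernel at each major frame boundary. */
void onMajorFrameBoundary(watScheduleState *s, uint64_t now)
{
    if (s->pendingWat != s->currentWat) {
        s->currentWat     = s->pendingWat;
        s->lastSwitchTime = now;  /* recorded and queryable, per the notes above */
        /* Threads in the new WAT begin receiving time; threads only in the
         * old WAT remain created but are no longer scheduled. */
    }
}
```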
653 Runtime Details
- 3 new APIs:
  - SET_MODULE_SCHEDULE
    - Need to map a SCHEDULE_ID to a WAT handle and tell the kernel to switch.
    - The window activation handler would need to detect the schedule change and invoke the ScheduleChangeAction (cold start, warm start, ignore). Prior to normal mode we don't have window activation registered, so that implies ignore, which is the defined behavior.
  - GET_MODULE_SCHEDULE_STATUS
    - A simple query to the kernel for window activation table info. Two fields are for the current WAT; we also need access to whether there is a pending WAT change and its id.
  - GET_MODULE_SCHEDULE_ID
    - Returns the WAT ID for a given schedule name.
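A minimal usage sketch of the three services, assuming the Part 2 APEX C bindings (SCHEDULE_ID_TYPE, SCHEDULE_STATUS_TYPE, RETURN_CODE_TYPE). The header name and the schedule name "FLIGHT_SCHEDULE" are illustrative, not part of the standard or the Deos runtime.

```c
#include "apex_api.h"   /* assumed header providing the APEX declarations */

void switchToFlightSchedule(void)
{
    SCHEDULE_ID_TYPE     scheduleId;
    SCHEDULE_STATUS_TYPE scheduleStatus;
    RETURN_CODE_TYPE     rc;

    /* Map the configured schedule name to a WAT handle. */
    GET_MODULE_SCHEDULE_ID("FLIGHT_SCHEDULE", &scheduleId, &rc);
    if (rc != NO_ERROR) return;

    /* Request the switch; it takes effect at the next major frame. */
    SET_MODULE_SCHEDULE(scheduleId, &rc);
    if (rc != NO_ERROR) return;

    /* Both the current and the pending schedule are visible until then. */
    GET_MODULE_SCHEDULE_STATUS(&scheduleStatus, &rc);
    /* scheduleStatus.CURRENT_SCHEDULE              - WAT in effect now
     * scheduleStatus.NEXT_SCHEDULE                 - pending WAT id
     * scheduleStatus.TIME_OF_LAST_SCHEDULE_SWITCH  - activation time */
}
```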
Configuration Details
- The integ tool would have a collection of WATs.
- The pd xml would identify which WAT a window is associated with.
- The 653 Config tool xml would be extended to carry the info needed to create a pd xml with multiple WATs.
- Most 653 rules are the same, just enumerated in a list for each WAT schedule and accounted for based on whether a partition is in that schedule or not.
Estimate
- Kernel - 4 weeks code. 12 weeks verification.
- 653 Library - 2 weeks code. 8 weeks verification.
- Tooling
- 653 Config Tool - 2 weeks
- Integ Tool - 2 weeks
- regcheck - 2-4 weeks
- OpenArbor Tooling - No impact.
Logbook System
A logbook consists of a RAM buffer and an NVM area. Multiple messages may be logged before being engraved in NVM. Message size can be variable, but space for the maximum message size is used per message. Messages are accessed newest to oldest. A logbook is only accessed by one partition. A write status of IN_PROGRESS, COMPLETE, or ABORTED is maintained.
APIs:
- CREATE_LOGBOOK
- WRITE_LOGBOOK
- OVERWRITE_LOGBOOK
- READ_LOGBOOK
- CLEAR_LOGBOOK
- GET_LOGBOOK_ID
- GET_LOGBOOK_STATUS
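A hedged usage sketch follows. The service names are from the list above, but the parameter lists shown are assumptions and must be checked against Part 2; the header name and the logbook name "FAULT_LOG" are illustrative.

```c
#include "apex_api.h"   /* assumed header providing the APEX declarations */

void logFaultRecord(MESSAGE_ADDR_TYPE msg, MESSAGE_SIZE_TYPE msgLen)
{
    LOGBOOK_ID_TYPE  logbookId;
    RETURN_CODE_TYPE rc;

    /* Look up the statically configured logbook by name. */
    GET_LOGBOOK_ID("FAULT_LOG", &logbookId, &rc);
    if (rc != NO_ERROR) return;

    /* The write is buffered in RAM; engraving to NVM completes later,
     * tracked by the IN_PROGRESS/COMPLETE/ABORTED write status. */
    WRITE_LOGBOOK(logbookId, msg, msgLen, &rc);
}
```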
Design
- Should there be a customizable library responsible for this?
- What is the NVM?
  - Is it board specific, or built on top of CFFS and/or the 653 P2 APIs?
  - Can the customer allocate NVRAM for this purpose? Who divides it per partition/logbook in that case?
  - Does CFFS gain a new native feature?
  - Part of the media is configured for this purpose; the P2 APIs become pass-through like some of the file system APIs.
Estimate
TBD
Service Access Points
The AFDX Driver supports AFDX SAP ports. The underlying mechanism uses IOI. This maps well to the 653 implementation, which uses IOI for queuing ports, and will help support the requirement that a SAP port may send a message to a queuing port and vice versa. However, the AFDX driver does not support specifying a SOURCE address; therefore, extended SAP messages will require additional afdx component support.
Note: If we wanted to do a socket-based implementation, we would need an intermediary Deos process like the afdx driver, since sockets utilize thread synchronization kernel APIs, which would block the entire 653 partition rather than a single 653 process. Otherwise, we would need to implement thread contexts in the kernel and have 653 processes use those context objects.
653 Library Details
- APIs
  - CREATE_SAP_PORT
    - Creates a data structure in the library, similar to CREATE_QUEUING_PORT. Calls ioi_open for the port.
  - SEND_SAP_MESSAGE
    - Handled the same as a 653 queuing port. Queue full is based on the IOI queue. The destination address is embedded in the IOI message that is transmitted.
  - RECEIVE_SAP_MESSAGE
    - Handled the same as a 653 queuing port. Queue full is based on the IOI queue. The source address is extracted from the IOI message that is received.
  - GET_SAP_PORT_ID
    - Similar to the queuing port service; gets the id based on the name.
  - GET_SAP_PORT_STATUS
    - Similar to the queuing port service. Gets the number of unread messages from the IOI queue. Does not interact with the AFDX driver.
  - SEND_EXTENDED_SAP_MESSAGE
    - Will be similar to SEND_SAP_MESSAGE, but the additional SOURCE address will be embedded in the IOI payload.
  - RECEIVE_EXTENDED_SAP_MESSAGE
    - Will be similar to RECEIVE_SAP_MESSAGE, but both SOURCE and DESTINATION will be returned.
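One possible framing for embedding the SAP addressing in the IOI payload, as described above. This is an illustrative sketch only, not the actual Deos, IOI, or AFDX driver wire format; all field names are invented.

```c
#include <stdint.h>

/* Hypothetical layout: the 653 library prepends addressing to the
 * application message before handing it to IOI. */
typedef struct {
    uint32_t destinationAddress;  /* standard SAP: set by sender */
    uint32_t sourceAddress;       /* extended SAP: supplied on send,
                                     returned on receive */
    uint32_t messageLength;       /* length of payload[] in bytes */
    uint8_t  payload[];           /* application message */
} sapIoiMessage;
```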
Estimate
Standard SAP ports
- 653 Library - 2 weeks code. 6 weeks verification.
- 653 Config Tool - 3 weeks
- AFDX tooling - No impact
- OpenArbor - No impact
Extended SAP ports
Extended SAP ports are broken out separately due to the effort involved. Standard SAP ports are supported by the underlying tools, and the only effort is within the 653 arena; extended SAP ports require afdx driver work. Note: The following estimates assume that the work for standard SAP ports has been completed or is being worked simultaneously.
- 653 Library - 2 weeks code + 6 weeks verification
- 653 Config Tool - 2 weeks
- AFDX driver - 2 months + verification
- AFDX driver config - 2 weeks
- AFDX driver cvt - 3 weeks
- 653 cvt - will be impacted if afdx driver config is generated and/or impacted
- OpenArbor - No impact
Name Service
Estimate
TBD
Health Monitoring Extensions
This was added in Supplement 2.
Estimate
TBD
Queuing Port List Services
This was added in Supplement 2. It allows a single process to service multiple receive queuing ports without polling each port (i.e., like a socket select API call). Behavior is defined such that the order in which ports are added to the list is the order in which they are serviced, round robin. Subsequent invocations can CONTINUE or RESET. Multiple lists are allowed, but a given port can belong to only one list.
APIs
- CREATE_QUEUING_PORT_LIST
- ADD_PORT_TO_QUEUING_PORT_LIST
- SET_PORT_ACTION_IN_QUEUING_PORT_LIST
- GET_QUEUING_PORT_ACTION_STATUS
- RECEIVE_MESSAGE_FROM_QUEUING_PORT_LIST
- WAIT_FOR_MESSAGE_FROM_QUEUING_PORT_LIST
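A select-style usage sketch. The parameter lists shown are assumptions, not the Part 2 Supplement 2 signatures, and should be checked against the standard; the header and list name are illustrative.

```c
#include "apex_api.h"   /* assumed header providing the APEX declarations */

#define MSG_MAX 256

void serviceReceivePorts(QUEUING_PORT_ID_TYPE portA, QUEUING_PORT_ID_TYPE portB)
{
    QUEUING_PORT_LIST_ID_TYPE listId;
    QUEUING_PORT_ID_TYPE      readyPort;
    APEX_BYTE                 buffer[MSG_MAX];
    MESSAGE_SIZE_TYPE         length;
    RETURN_CODE_TYPE          rc;

    /* Ports are serviced round robin in the order they are added. */
    CREATE_QUEUING_PORT_LIST("RX_PORTS", &listId, &rc);
    ADD_PORT_TO_QUEUING_PORT_LIST(listId, portA, &rc);
    ADD_PORT_TO_QUEUING_PORT_LIST(listId, portB, &rc);

    for (;;) {
        /* Block until any port in the list has a message; CONTINUE resumes
         * round-robin servicing where the previous call left off. */
        WAIT_FOR_MESSAGE_FROM_QUEUING_PORT_LIST(listId, CONTINUE,
                                                INFINITE_TIME_VALUE,
                                                &readyPort, buffer, MSG_MAX,
                                                &length, &rc);
        if (rc == NO_ERROR) {
            /* process buffer[0..length-1], received from readyPort */
        }
    }
}
```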
Estimate
TBD
Multiple Processor Cores Extensions
Adds the following APIs:
- SET_PROCESS_CORE_AFFINITY
- GET_PROCESS_CORE_AFFINITY
- SET_DEFAULT_PROCESS_CORE_AFFINITY
One issue in supporting this category is that we do not intend to support CORE_AFFINITY_ANY (SMP) and the migration of processes.
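A short sketch under the assumption that the affinity services take a process id, a core id, and a return code; the exact Part 2 signatures and types have not been verified, and the header name is illustrative.

```c
#include "apex_api.h"   /* assumed header providing the APEX declarations */

void pinProcessToCore(PROCESS_ID_TYPE pid)
{
    PROCESSOR_CORE_ID_TYPE core;
    RETURN_CODE_TYPE       rc;

    SET_PROCESS_CORE_AFFINITY(pid, 1, &rc);      /* bind the process to core 1 */
    GET_PROCESS_CORE_AFFINITY(pid, &core, &rc);  /* read the assignment back */
    /* CORE_AFFINITY_ANY would imply SMP-style process migration,
     * which we do not intend to support. */
}
```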
Interrupt Services
Adds the following APIs:
- CREATE_INTERRUPT
- GET_INTERRUPT_ID
- WAIT_INTERRUPT
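A sketch of the expected usage pattern; the Supplement 5 signatures shown are assumptions, and the header and interrupt name are illustrative.

```c
#include "apex_api.h"   /* assumed header providing the APEX declarations */

void serviceDiscreteInterrupt(void)
{
    INTERRUPT_ID_TYPE intId;
    RETURN_CODE_TYPE  rc;

    GET_INTERRUPT_ID("DISCRETE_IN_1", &intId, &rc);
    if (rc != NO_ERROR) return;

    for (;;) {
        /* Block the calling 653 process until the interrupt fires. */
        WAIT_INTERRUPT(intId, INFINITE_TIME_VALUE, &rc);
        if (rc == NO_ERROR) {
            /* service the device */
        }
    }
}
```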