CFFS A653P2
An ARINC 653 Part 2 File System API wrapper for the CFFS Server.
Overview
Part 2 File System
Create a stand-alone library that presents an ARINC 653 Part 2 file system interface to the client. The following API interfaces will be included (an illustrative sketch of the C binding shape follows the discussion below):
- CLOSE_DIRECTORY
- CLOSE_FILE
- GET_FILE_STATUS
- GET_VOLUME_STATUS
- MAKE_DIRECTORY
- OPEN_DIRECTORY
- OPEN_FILE
- OPEN_NEW_FILE
- READ_DIRECTORY
- READ_FILE
- REMOVE_DIRECTORY
- REMOVE_FILE
- RENAME_FILE
- RESIZE_FILE
- REWIND_DIRECTORY
- SEEK_FILE
- SYNC_DIRECTORY
- SYNC_FILE
- WRITE_FILE
Rfrost@ddci.com The current supplement of ARINC 653 Part 2 is Supplement 2, which added two additional APIs: SYNC_DIRECTORY and RENAME_DIRECTORY. Will these be supported?
JKelley@ddci.com SYNC_DIRECTORY added above, see short list for problems with RENAME_DIRECTORY.
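For orientation, below is a minimal sketch of what the C binding shape for two of the services listed above could look like in the wrapper library. The type names, parameter lists, and error codes are illustrative assumptions in the usual ARINC 653 style, not quotes from the Part 2 spec; the spec remains the authority on the actual signatures.

/* Illustrative only -- types, parameters, and constants are assumptions, not spec text. */
typedef long RETURN_CODE_TYPE;                 /* e.g. NO_ERROR, INVALID_PARAM, NOT_AVAILABLE */
typedef long FILE_ID_TYPE;
typedef long FILE_SIZE_TYPE;
typedef enum { READ = 0, READ_WRITE = 1 } FILE_MODE_TYPE;

/* Open an existing file on a configured volume; the returned FILE_ID is used by
   READ_FILE, WRITE_FILE, SEEK_FILE, and CLOSE_FILE. */
extern void OPEN_FILE(
    /* in  */ const char       *FILE_NAME,
    /* in  */ FILE_MODE_TYPE    FILE_MODE,
    /* out */ FILE_ID_TYPE     *FILE_ID,
    /* out */ RETURN_CODE_TYPE *RETURN_CODE);

/* Read up to MESSAGE_SIZE bytes from the current file position into MESSAGE_ADDR. */
extern void READ_FILE(
    /* in  */ FILE_ID_TYPE      FILE_ID,
    /* in  */ char             *MESSAGE_ADDR,
    /* in  */ FILE_SIZE_TYPE    MESSAGE_SIZE,
    /* out */ FILE_SIZE_TYPE   *MESSAGE_LENGTH,
    /* out */ RETURN_CODE_TYPE *RETURN_CODE);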
Part 2 Logbook System
Implement the Logbook feature within the same file system library. The following API interfaces will be included:
- CLEAR_LOGBOOK
- CREATE_LOGBOOK
- GET_LOGBOOK_ID
- GET_LOGBOOK_STATUS
- OVERWRITE_LOGBOOK
- READ_LOGBOOK
- WRITE_LOGBOOK
Staff
Configuration Management
Development work occurs in API URL
A spreadsheet describing each call's blocking strategy is located: Here
Problem Reporting
File System Development PCR7375
Logbook System Development PCR11821
Logbook System Development II PCR11890
Server Enhancement on-hold PCR9481
API Enhancement on-hold PCR9482
Version 1.3.0 Supports full 511 char path names PCR11533
Part 2 File System
Potential Limitations Discussion
This section describes the complexity of implementing an ARINC 653 File System on top of the Deos CFFS API. The intention is to document the issues, receive feedback, and present a plan of approach to the customer.
Note that it is an option to replace the CFFS API with this library. The reason this may help is efficiency in keeping track of open files: the CFFS API searches the iNode list for each operation, whereas with an open-based API this search would only need to be done once, at open time, instead of on every read or write.
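As a rough sketch of that efficiency argument (names and fields below are hypothetical, not existing CFFS or 653 identifiers): the library would resolve the CFFS iNode once when a file is opened and cache it in a per-process open-file table, so reads and writes index the table instead of re-searching the iNode list.

#include <stdint.h>
#include <stdbool.h>

#define MAX_OPEN_FILES 32              /* would come from the configured MaxNumberOfOpenFiles */

/* Hypothetical per-process open-file table entry. */
typedef struct {
    bool     inUse;
    uint32_t cffsInodeIndex;           /* resolved once, at OPEN_FILE time */
    uint64_t filePosition;             /* current seek position, kept by the 653 layer */
} OpenFileEntry;

static OpenFileEntry openFiles[MAX_OPEN_FILES];

/* OPEN_FILE performs the single iNode search and stores the result; the returned
   FILE_ID is simply the table index, so READ_FILE/WRITE_FILE avoid any search. */
static int allocateOpenFile(uint32_t inodeIndex)
{
    for (int fd = 0; fd < MAX_OPEN_FILES; fd++) {
        if (!openFiles[fd].inUse) {
            openFiles[fd].inUse          = true;
            openFiles[fd].cffsInodeIndex = inodeIndex;
            openFiles[fd].filePosition   = 0;
            return fd;                 /* used as the 653 FILE_ID */
        }
    }
    return -1;                         /* table full: too many open files */
}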
| Item | Issue | A653 Spec | CFFS Limitation or Work Around |
| 1 | Implement 653 Part 2 File System | N/A | Quoted as an API library/layer on top of CFFS. Desire is to NOT change the currently verified CFFS Server. Here, the verified CFFS Server entails the stand-alone Deos executable cffsserver.exe and a companion shared library (libcffsapi.so) that is linked into each CFFS client process. |
| 2 | 653 Metadata Journaling | No journaling mentioned, but atomic operations are expected, i.e., writes complete before reads of the same data and reads complete before writes of overlapping data. | CFFS journals its own metadata, but doesn't know about 653 metadata and therefore cannot natively journal it. It would be very complex to layer this capability on top, and doing so would cause a significant performance hit to the file system. Recommend NOT journaling 653 metadata.
Rfrost@ddci.com I'm unclear on the impact. Will we not have atomic operations? JKelley@ddci.com Reads and Writes are limited by max Atomicity; requests that exceed this size get an error code returned. Reads and Writes will be Atomic. The decision here is to present 128-character path names and no time stamping so that journaling is not required. |
| 3 | Directory Levels | Minimum 30 levels, max 512-character fully qualified file names. Directory and file names up to 64 characters each. | CFFS only stores 128-character file names and has no native directory support. However, filenames can contain slashes, so directory support can be simulated within the 128-character limit (a minimal path-flattening sketch follows this table). Also, CFFS partition names can hold 128 characters, so if the client can predetermine the first few directory levels and specify them in the config files, those could be simulated as partition names.
Meeting the 653 spec would require a hashing scheme, with the real path/file names stored as metadata in a hidden file. To avoid data loss during power interrupts, this metadata file would need a journaling mechanism so that an interrupted creation of a new file would not corrupt the whole metadata file. Meeting this part of the spec is probably the riskiest in terms of time and money. Recommend seeing if the client can live within the 128/256-character CFFS limitation. The alternative is opening the CFFS Server and implementing 512-character filenames, which should be relatively straightforward (double iNodes). Rfrost@ddci.com I think we should propose our limitation for this release and see where we get. The server update seems better than the hashing if we needed to implement something. JKelley@ddci.com Agreed, this is the plan. |
| 4 | Directory Levels II | N/A | Instead of using a hidden file to store the real filename, this name could be made part of the file data. Basically, the first 512 characters of the file would contain the real name. This avoids the hidden file and journaling requirement, but could still result in the loss of a particular file's integrity. For instance, on a rename, the hashed name would change, resulting in a CFFS metadata update; then the first 512 characters holding the real file name require another CFFS write. If one happens and not the other, the file could become lost or corrupt.
Bill Cronk: If we do the server update in 3 above, or the customer accepts our limitation in 3 above - is this still an issue? JKelley@ddci.com No Issue. |
| 5 | Config Files / XML | Appendix H lays out the schema, though it is light on file system details. It mentions file system volumes, user access, and MaxNumberOfOpenFiles. | Volume and access are already controlled by the CFFS config file. We do need to know MaxOpenFiles, and possibly a buffer size, though. The question is: do we need to create and use an XML schema and associated toolset, or can we just push this small number of parameters to Deos environment variables, or add them to a CFFS-formatted config file (existing or new)?
Rfrost@ddci.com 653 users will expect the xml schema. The schema can be extended from the basic types. In general we have the 653 configuration tool take the 653 schema and produce the other files (current registry and IOI XML, and 653 binary config file). Could it produce the whole cffs config file or portions? JKelley@ddci.com Decision is to expand the xml schema to include file system needs. Possibly even producing the CFFS config and init files. |
| 6 | Volume Aliasing and Init | File system should be available when a 653 Partition starts. Configuration tables include items such as storage device identification, access rights, space allocation (quota), file descriptor allocation, and volume name aliasing (the name used by an application to reference the volume). These attributes will be applied to a portion of a storage device referred to as a volume. Creation and initialization (e.g., mounting) of the volumes are implementation dependent and are outside of the services provided in the ARINC 653 file system. When an application starts, the volumes it has been configured for should be available for use. | CFFS requires us to wait for its Alive counter. There is no Init function in the 653 spec for the file system, and no Mount command. In looking at the 653 RTL already in place, it looks like a static global was used to keep track of whether the Init() has happened yet, so it can be called only once, during the first API call. Recommend the same for the file system. Init will attach to resources, do a non-blocking check of the Alive counter, then set the flag.
As far as Volume Aliasing, the existing CFFS config file allows CFFS partitions to be named and allocated to Deos Processes (653 partitions). The question is: do we need to provide a config item to allow additional aliasing, such as multiple aliases for the same volume, or a Windows convention (e.g., C:\) as the volume name? Bill Cronk: The file system team will have to answer this question. JKelley@ddci.com Decision is to allow multiple aliasing via the xml. The 653 API will unwind the aliases during file opening. Aliasing will be per-partition so different partitions can define their own aliases. JKelley@ddci.com Init implementation is OK as stated above (a minimal init/locking sketch follows this table); just need to advise the customer, and document that the first file system API call may take several periods to complete as it waits for the CFFS Alive counter. This method is recommended over initializing at library load time, as it gives the server (and partitions) time to start while the 653 partition makes progress toward Normal Mode. Ideally, the server is up and running before the first file system call anyway. Also, many of the file system API calls will require one or more periods to complete as the server needs time to complete the operation. |
| 7 | Access Rights | A single 653 Partition can have write access to a Volume, others can have read access. | A 653 Partition is the same as a Deos Process which can be given write access to a CFFS partition (volume) via the CFFS config file.
Found conflicting information in the spec regarding read access to volumes. In one place it states that by default, no one has read or write access to a volume unless specified in the config. Elsewhere it simply states that only one owner can be specified for write access, but others can read. CFFS allows anyone to read any volume (partition); blocking that would require additional config data (XML etc.). This doesn't affect partitioning. Recommend just stating that those configured to use the 653 file library can read anything the 653 file library manages. Rfrost@ddci.com This is a military program so they may have concerns about this. In general we're not supporting MILS or a security profile yet, so we can see if this is acceptable. JKelley@ddci.com Decision is to enforce read protection in the 653 API as defined in the xml. This does not prevent an application from using the CFFS API to go directly after a file; in other words, this read protection is not space partitioned, and is in place only to meet the 653 spec. Going outside the 653 API is not recommended. |
| 8 | Blocking Calls | N/A | The 653 RTL requires our library to NOT make any Deos blocking calls; a blocking call would halt the whole 653 partition scheduler. The CFFS API uses a semaphore to protect its command queue. This is a very small portion of code and can be protected by wrapping CFFS API calls with a 653 LOCK_PREEMPTION call (see the sketch following this table). In this way, the semaphore lock will always succeed. |
| 9 | Atomic Operations | Readers of a file should get consistent data either from before a write happens or from after a write completes for a given file. | Sounds simple enough, but not so. Since we have no control over buffer size or request size, write requests will inevitably be chopped up into several CFFS requests. Within a 653 Partition, the new library can know if a write on a file is in progress and act accordingly. If another 653 partition is trying to read the file, that would require inter-partition communication, which would have other ramifications: what if a partition was writing to a file, told all other partitions this, then got restarted? That file would be locked out forever, etc. Suggest a limitation that Atomic updates are only guaranteed within Partition scope. Readers outside the owner partition are not guaranteed Atomic reads. If needed, push that on the client.
Alternatively, the server would need to be modified in a non-trivial way to keep track of an update-in-progress on a file, and again, that could cause a lock-out if an update was in progress and a partition restart failed to clear that flag. JKelley@ddci.com Resolved: since requests cannot be larger than the max atomicity, they will not be chopped up; an error code is returned instead. Max atomicity will be set to the allocated buffer (port) size minus 512 bytes (to handle sector alignment). The buffer size is calculated as the memory resource (CAR) size divided by the xml-defined max open files. |
| 10 | CAR Size | N/A | The CFFS API uses memory-mapped RAM for command and data buffers to the Server. The 653 library will additionally use some of this CAR RAM for its own internal use, such as the current file position, etc., for each open file.
mdiethelm@ddci.com The Client Access Resource's (CAR) internal arrangement is private to the CFFS, so perhaps the 653 file library could allocate its own RAM backed platform resource, memory object, or heap RAM via virtualAllocDeos()? JKelley@ddci.com Agreed, the 653 API will allocate RAM from process space, not the CAR. |
| 11 | File System Bandwidth | An implementation should provide means to allocate bandwidth to partitions. | CFFS already allows bandwidth allocation per partition via the config file. |
| 12 | Date/Time Created/Modified | Metadata should contain file creation and last-modified date/times in a specified format. There is a flag to indicate when a date/time is not available. | Currently the 653 RTL's only time is nanoseconds since cold start. There is no RTC, nor APIs to go along with one. There is no way to set the time or retrieve it. Suggest just setting the Not Available flag for this release.
JKelley@ddci.com Team agreed to present NO time stamping capability on file system, since no RTC capability exists at this time. APIs that return time stamps will set the Not Available flag. |
| 13 | Date/Time Created/Modified II | N/A | If/When an RTC is available, this metadata could become inconsistent with the file if a restart occurs. Since we don't desire to journal the 653 layer, one must either update the file, then the modified date, or vice-versa. An inopportune partition restart or power cycle would cause one or the other to be mismatched.
Also, each timestamp is a 64-bit number. CFFS has two 32-bit user-defined values that could be used to hold the last-update time. Creation time would need another home, such as embedded in the file itself or in a hidden file of metadata. Again, journaling and keeping this metadata consistent and uncorrupted would be tricky. Suggest only storing the last-modified date/time, and passing creation time to the client or always marking the creation-time flag as not available. JKelley@ddci.com No time stamping. |
| 14 | Error Checking | There are lots of possible error checks, valid filenames being one of them. | It has always been the philosophy of the CFFS to NOT protect a client from itself. The debug version of the CFFS was the place for extra checks and error reporting to assist the client. As long as partitioning is maintained, a client is allowed to corrupt themselves if the rules are not followed.
Do we want to break from this philosophy for the 653 API to try to completely meet the spec, or stick with our existing strategy? If, for instance, a test application intentionally passed an invalid filename and we didn't return the corresponding error code, we'd fail the 653 spec test. Rfrost@ddci.com The 653 runtime implements all checks that are described in the standard. We do have an expanded set of error statuses, so you can call GET_LAST_ERROR and get a unique error status rather than just INVALID_CONFIG or INVALID_PARAM. The debug version will write out an error message as well. There is a conformance suite described in Part 3. I don't know the details on getting it, but I think GMV in Portugal is the company that developed it and is making Part 3 updates as new supplements come out. Bill Cronk: we will need to comply with the spec. JKelley@ddci.com Comply with the spec at the 653 API layer, noting that space partitioning is NOT protected if one goes outside the 653 API libraries. |
| 15 | Case Preserving | It is implementation dependent whether a file system is case preserving or not. | The CFFS is case preserving. When using names returned by a file system service, the application developer should consider the names returned as case preserving (i.e., may need to be converted to all uppercase or lowercase before using the names in a comparison). |
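Sketch for item 3 above: since filenames may contain slashes, directories can be simulated by flattening a 653 path into one flat CFFS name, as long as everything fits within the 128-character limits. The helper below is hypothetical and only illustrates splitting the leading volume name from the rest of the path.

#include <string.h>
#include <stdbool.h>

#define CFFS_MAX_NAME 128   /* CFFS filename and partition name limit */

/* Hypothetical helper: map "/VOL1/maps/region/coast.dat" to the CFFS partition
   name "VOL1" and the flat CFFS filename "maps/region/coast.dat".
   Returns false if either piece would exceed the CFFS limit. */
static bool flatten653Path(const char *fullPath,
                           char cffsPartition[CFFS_MAX_NAME],
                           char cffsName[CFFS_MAX_NAME])
{
    const char *p = fullPath;
    if (*p == '/') {
        p++;                                     /* skip leading separator */
    }
    const char *slash = strchr(p, '/');
    if (slash == NULL) {
        return false;                            /* no volume component */
    }
    size_t volLen = (size_t)(slash - p);
    if (volLen >= CFFS_MAX_NAME || strlen(slash + 1) >= CFFS_MAX_NAME) {
        return false;                            /* over the 128-character limits */
    }
    memcpy(cffsPartition, p, volLen);
    cffsPartition[volLen] = '\0';
    strcpy(cffsName, slash + 1);                 /* slashes kept; CFFS sees one flat name */
    return true;
}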
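Sketch for items 6 and 8 above: one-time initialization performed on the first file system call (with a non-blocking check of the CFFS Alive counter), and CFFS requests bracketed with LOCK_PREEMPTION so the semaphore acquisition inside the CFFS API cannot block the 653 partition scheduler. The CFFS helper names are placeholders, and the APEX prototypes are simplified for illustration.

#include <stdbool.h>

typedef long RETURN_CODE_TYPE;
typedef long LOCK_LEVEL_TYPE;

/* Assumed APEX services (prototypes simplified for illustration). */
extern void LOCK_PREEMPTION(LOCK_LEVEL_TYPE *LOCK_LEVEL, RETURN_CODE_TYPE *RETURN_CODE);
extern void UNLOCK_PREEMPTION(LOCK_LEVEL_TYPE *LOCK_LEVEL, RETURN_CODE_TYPE *RETURN_CODE);

/* Hypothetical CFFS client helpers -- names are placeholders, not the real CFFS API. */
extern bool cffsAliveCounterAdvancing(void);   /* non-blocking check of the server Alive counter */
extern void cffsAttachResources(void);
extern int  cffsSubmitRequest(const void *request);

static bool fsInitialized = false;   /* static global: init happens once, on the first API call */

/* Called at the top of every 653 file system service. */
static bool ensureInitialized(void)
{
    if (!fsInitialized) {
        cffsAttachResources();
        if (!cffsAliveCounterAdvancing()) {
            return false;            /* server not up yet; caller returns a NOT_AVAILABLE-style error */
        }
        fsInitialized = true;
    }
    return true;
}

/* CFFS requests are bracketed with LOCK_PREEMPTION so the semaphore acquisition
   inside the CFFS API cannot block the 653 partition scheduler. */
static int protectedCffsRequest(const void *request)
{
    LOCK_LEVEL_TYPE  level;
    RETURN_CODE_TYPE rc;
    LOCK_PREEMPTION(&level, &rc);
    int result = cffsSubmitRequest(request);
    UNLOCK_PREEMPTION(&level, &rc);
    return result;
}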
Limitations Short List
This section contains the items from the #Potential Limitations Discussion that engineering agreed are reasonable limitations. Looking for Management and Customer buy-in.
| Item | Issue | A653 Spec | CFFS Limitation or Work Around |
| 1, 3 above | Directory Levels | Minimum 30 levels, max 512-character fully qualified file names. Directory and file names up to 64 characters each. | Version 1.3.0, along with CFFS 7.0.0, removes this limitation.
CFFS limits fully qualified path names to 128 characters, and partition names to 128 characters. 653 limits volume and directory names to 64 characters each. Therefore, the max fully qualified volume/pathname the CFFS supports is 64 + 128 = 192 characters. Said another way, the max path/filename excluding the volume/alias name is 128 characters. |
| 2, 6 above | Init | File system should be available when a 653 Partition starts. |
Document that the first file system API call may take several periods to complete as it waits for the CFFS Alive counter. This method is recommended over initializing at library load time, as it gives the server (and partitions) time to start while the 653 partition makes progress toward Normal Mode. Ideally, the server is up and running before the first file system call anyway. Also, many of the file system API calls will require one or more periods to complete as the server needs time to complete the operation. |
| 3, 7 above | Access Rights | A single 653 Partition can have write access to a Volume, others can have read access if configured. |
Decision is to enforce read protection in the 653 API as defined in the xml. This does not prevent an application from using the CFFS API to go directly after a file; in other words, this read protection is not space partitioned, and is in place only to meet the 653 spec. Going outside the 653 API is not recommended. |
| 4, 14 above | Error Checking | There are lots of possible error checks, valid filenames being one of them. |
The 653 API will comply with the spec at the 653 API layer, noting that space partitioning is NOT protected if one goes outside the 653 API libraries. |
| 5, 15 above | Case Preserving | It is implementation dependent whether a file system is case preserving or not. | The CFFS is case preserving. When using names returned by a file system service, the application developer should consider the names returned as case preserving (i.e., may need to be converted to all uppercase or lowercase before using the names in a comparison). |
| 6, new | Cold Start | Conflict in spec: it says file system access during Cold Start is possible, but since preemption is locked during cold start, and all file system APIs return an error if preemption is locked, access during cold start is not technically feasible. | While file system access is available during cold start, an error will always be returned since preemption is locked.
Rfrost@ddci.com The committee knows about this and changed the pseudo code from "preemption is disabled" to "current process has preemption locked and operating mode is NORMAL". When this comes out in Part 2 Supplement 3 it will also have multicore and mutexes, so at that point it will state "current process owns a mutex and operating mode is NORMAL". The intent in our design should be to allow file system access during initialization. |
| 7, new | RENAME_DIRECTORY | Newer version of the P2 spec includes this API. | Since the CFFS is flat (no directories), creativity and workarounds were needed to mimic that feature. Unfortunately, allowing a client to rename a directory, even changing its nesting level, will require multiple non-atomic operations to complete. Still analyzing the impact, but for now suggest we either don't support directory rename at all, or at least limit it to the same nesting level, which is still problematic as it requires a hashing mechanism for the real pathname and hidden files to unwind the pathname on each open (a minimal hashing sketch follows this table). For instance, a 10-level-deep path name would require around 10 CFFS seeks to determine the actual path name. |
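To make the hashing idea in items 3, 4, and 7 concrete, a minimal sketch: the long real pathname would be hashed to a short, fixed-length CFFS-visible name, and the real name would still need to be stored somewhere (hidden metadata file or file header) so it can be unwound on opens and directory reads, which is exactly the journaling/consistency burden described above. The 32-bit FNV-1a hash and the name format are arbitrary choices for illustration, not a committed design.

#include <stdint.h>
#include <stdio.h>

/* One possible hash (32-bit FNV-1a); the hash and the "H_XXXXXXXX" name format
   are illustrative choices only. */
static uint32_t fnv1a32(const char *s)
{
    uint32_t h = 2166136261u;
    while (*s != '\0') {
        h ^= (uint32_t)(unsigned char)*s++;
        h *= 16777619u;
    }
    return h;
}

/* Map a long 653 pathname such as "/VOL1/a/b/c/d/e/f/g/h/data.bin" to a short
   CFFS-visible name such as "H_1A2B3C4D" that fits easily within 128 characters. */
static void hashedCffsName(const char *realPath, char out[16])
{
    (void)snprintf(out, 16, "H_%08X", (unsigned)fnv1a32(realPath));
}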
Part 2 Logbook System
XML Configuration Discussion
- 653 config tool needs to allocate logbook quotas for the feature provider on a partition-by-partition basis.
- Need to define a formula for sizing the memory quotas needed for logbooks.
- 653 config tool will add allocations for the logbook memory required per partition.
- 653 config tool will add quota for an engraver process per partition if there are logbooks in that partition.
- 653 config tool will provide the PRIORITY_TYPE to use for the engraver process; the PRIORITY_TYPE is set by the user in the 653 xml file.
Binary File Format
#define MAX_LOGBOOK_NAME_SIZE (32)
// Current !!!!!!!!!!!!!!!!!!!!!!!!!!!
typedef struct {
    uint32_t numberOfPartitions;
    uint32_t offsetToListOfPartitions;   // points to list of A653P2_PartitionHeaderType
    uint32_t numberOfVolumes;
    uint32_t offsetToListOfVolumes;
    uint32_t numberOfLogbooks;
    uint32_t offsetToListOfLogbooks;
} LB_A653P2_HeaderType;
typedef struct {
    uint32_t partitionHashedName;   // Sorted, ascending. This is also the deos process instance hashed name
    uint32_t maxOpenFiles;
    uint32_t numberOfVolumeACLs;
    uint32_t offsetOfVolumeACLs;
    uint32_t offsetToCARName;
} A653P2_PartitionHeaderType;
typedef struct A653P2_Logbook_TAG {
    char     Logbook_Name[MAX_LOGBOOK_NAME_SIZE];
    uint32_t Max_Message_Size;
    uint32_t Max_Nb_Logged_Messages;
    uint32_t Max_Nb_In_Progress_Messages;
    uint32_t partitionHashedName;              /* hashed 32-bit number */
    uint32_t offsetToCARName;
    uint32_t offsetToCFFSPartitionName;
} A653P2_LogbookType;
// Proposed !!!!!!!!!!!!!!!!!!!!!!!!!!!
typedef struct {
    uint32_t numberOfPartitions;
    uint32_t offsetToListOfPartitions;   // points to list of A653P2_PartitionHeaderType
    uint32_t numberOfVolumes;
    uint32_t offsetToListOfVolumes;
    // Note: removed numberOfLogbooks / offsetToListOfLogbooks from this structure; moved to A653P2_PartitionHeaderType.
} A653P2_HeaderType;
typedef struct
{
    uint32_t partitionHashedName;       // Sorted, ascending. This is also the deos process instance hashed name
    uint32_t maxOpenFiles;
    uint32_t numberOfVolumeACLs;
    uint32_t offsetOfVolumeACLs;
    uint32_t offsetToCARName;
    uint32_t numberOfLogbooks;          // for This partition only!!!
    uint32_t offsetToListOfLogbooks;    // List of A653P2_LogbookType for only this partition.
    uint32_t engraverProcessPriority;
    uint32_t maxRAMBytesLogbooks;       // Does this help having this here?
} A653P2_PartitionHeaderType;
typedef struct A653P2_Logbook_TAG
{
    char     Logbook_Name[MAX_LOGBOOK_NAME_SIZE];
    uint32_t Max_Message_Size;
    uint32_t Max_Nb_Logged_Messages;
    uint32_t Max_Nb_In_Progress_Messages;
    uint32_t offsetToLogbookCARName;
    uint32_t offsetToLogbookCFFSPartitionName;
} A653P2_LogbookType;
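To show how the offsets in the proposed layout would be consumed, here is a small sketch of a reader locating one partition's logbook list. It assumes all offsets are byte offsets from the start of the binary configuration image; that assumption, and the lookup function itself, are illustrative only.

#include <stdint.h>
#include <stddef.h>

/* Assumes the proposed structures above, and that every offset field is a byte
   offset from the start of the binary configuration image (an assumption). */
static const A653P2_LogbookType *findPartitionLogbooks(const void *configBase,
                                                       uint32_t partitionHashedName,
                                                       uint32_t *numLogbooks)
{
    const uint8_t *base = (const uint8_t *)configBase;
    const A653P2_HeaderType *hdr = (const A653P2_HeaderType *)base;
    const A653P2_PartitionHeaderType *parts =
        (const A653P2_PartitionHeaderType *)(base + hdr->offsetToListOfPartitions);

    /* Partition headers are sorted ascending by hashed name; a linear scan is
       shown for brevity (a binary search would also work). */
    for (uint32_t i = 0; i < hdr->numberOfPartitions; i++) {
        if (parts[i].partitionHashedName == partitionHashedName) {
            *numLogbooks = parts[i].numberOfLogbooks;
            return (const A653P2_LogbookType *)(base + parts[i].offsetToListOfLogbooks);
        }
    }
    *numLogbooks = 0;
    return NULL;
}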