FACE Project


As per Ole's email of 2016/2/12, this project is on hold.

See also Deos_RTEMS.

Description

This project is to develop a FACE-compliant interface to run on Deos. Adding FACE to Deos will allow Deos to be used in defense avionics, which may need eventual certification, and will protect the market for commercial avionics, which is expected to support more multi-use applications for defense as well. The addition of FACE primarily involves the use of the ARINC 653 and POSIX APIs by user applications.

Work packet notes

  1. Limit each packet to a few hours of work.
  2. Create PCRs for components.
  3. Assign the component in the "Deos component" column of the spreadsheet.
  4. Add the PCR reference to the "TBD" column of the spreadsheet for each API.
  5. PCRs should have blocks/depends defined when the dependency is in the same Bugzilla.

Documentation

The following contains links to the individual documents from the standard and to our spreadsheet categorizing which component each API belongs to: FACE Doc Index

You can see the POSIX definitions at http://pubs.opengroup.org/onlinepubs/9699919799/functions/contents.html

Also, if you want a local copy, you can register at the Open Group and download one from http://pubs.opengroup.org/onlinepubs/9699919799/nframe.html

FACE Profiles

There are 4 profiles that are part of the FACE standard. Each builds on the previous. DDC-I would be targeting Safety Base.

  • Security
  • Safety Base
  • Safety Extended
  • General Purpose

POSIX API's

The Safety Base profile has 241 APIs. DDC-I already provides 74 of these in the ANSI, MATH, and SAL components.

  • 7 missing from ANSI. PCR:7927
  • 8 missing from MATH. PCR:7928
  • 2 missing for memory (possibly goes in ANSI). PCR:8658
  • 43 missing for file system.
  • 68 missing for pthreads.
  • 6 missing from SAL. PCR:7929
  • 2 missing for shared memory (part of what component?). PCR:8659
  • 14 missing for signals.
  • 17 missing for time/timers/clocks.

See the FACE POSIX API spreadsheet for a breakdown of APIs per FACE profile.

Concerns when designing within new Deos kernel paradigm

653 Questions

  • How is the 653 timeout used in waitForEvent calls?
  • How is the queuing port queue managed? It can be managed in the library as before, but there are impacts to window activation time, and issues if POSIX apps want to use the API.
  • How is deadline management handled?
  • Does the priority scheduler do anything with 653 periods? Assume the library needs to be involved in release point management.
  • Need to ensure window activation logic pays attention to scheduler critical sections. Even if a high priority is used to signify this, we may be in the middle of updating data structures and not want the state of ready threads to change. The current 653 implementation sets a flag and delays window activation until preemption is released.

ISSUE: need to resolve HeartOS code use and in which IP repository the POSIX library will reside.

POSIX questions

  • Each POSIX scheduler would need an SMO or platform resource for its auxiliary data structures that wrap various kernel objects. Unlike 653, where the data is per partition, POSIX partitions need to share knowledge across a set of partitions.
  • A valid PID is assumed to be positive (ref. kill()).


  • FIRM DECISION: POSIX threads, 653 processes, and RMA threads will all use kernel threads.
    • Example attributes:
      • time enforcing
      • can support asynchronous timers
  • DECISION: Realtime and monotonic clocks, timers, and timeouts will be in the kernel.
    • RMA schedulers are not (initially) required to support timer signals.
    • At this point we think we can support clocks and timeouts for RMA.

Header files:

  • If a type is defined in a header file that comes from kernel, the header probably should not have the POSIX name unless we're sure the functionality will always be in the kernel.
    • E.g., kernel time types and functions may not be POSIX names.

Permissions

UGO (user/group/other) permissions apply to semaphores, shared memory, and message queues. ISSUE: Should we be switching names to UGO?

  • probably not.
  • An option for implementing permitted binder is a more generic directory services.

Clocks/Timers

The APIs will be implemented in the kernel and the ANSI component. Options:

  1. Add realtime timeouts to the kernel APIs
    1. Both realtime and monotonic clocks in kernel
      1. Setting delta is in kernel
  2. Put monotonic clock timer in kernel
    1. Need some mechanism for handling realtime "delta" adjustment.
  3. Have library with high priority thread that manages timeouts for the threads in the process.
    1. Some hardware has a timer that can raise a time interrupt
    2. Kernel needs way to have that interrupt delivered to different threads at various points in time (i.e., different windows).
    3. The "interrupt receiving thread" must be able to raise an exception to the list of threads waiting.
      1. Those threads may be in different processes.
    4. Kernel would still need to support some sort of timer delivery mechanism
  • For set time, the suggestion is to have a PRL handle access control, calling a kernel provided PPI.
  • How are clock timeouts used in waitForEvent|Semaphore calls?
  • How do we create timers based on clocks?
    • realtime (allowed to set, if access is granted)
      • Need some sort of access control
    • monotonic (since startup)
    • Are waits based on an interrupt, or on polling (window/period or schedule point)?
      • Most functionality is in the POSIX library; we only need the kernel or PAL to track the system tick rollover count
      • Does the kernel handle absolute time references (in wait calls)?
        • No: assume the library somehow handles this (see the sketch after this list)
          • Kernel will need: ???
      • PAL or kernel keeps track of rollover of system ticks
    • process CPU time (probably not required since clock_getcpuclockid is not part of safety or safety extended),
    • thread CPU time (presumably required since pthread_getcpuclockid is part of safety base and extended).
  • Allowed to set time on CPU-time clocks, but these do not impact current scheduling (the case of the SPORADIC scheduler, which I don't think is required for FACE)
  • How do those timers get triggered and signal the process/thread?
  • How do we allow setting the time of the realtime clock, which is shared by all processes? Access control is allowed to be required for that operation.
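As a concrete illustration of option 3 and the absolute-time question above, here is a minimal sketch of how the library could convert a POSIX absolute CLOCK_MONOTONIC timeout into the relative tick count a kernel wait would take. kernWaitForEventTicks() and the 1 ms tick size are assumptions for illustration, not existing Deos APIs.

#include <errno.h>
#include <stdint.h>
#include <time.h>

#define NS_PER_TICK 1000000ULL                      /* assumed 1 ms system tick */

extern int kernWaitForEventTicks(void *event, uint64_t ticks);  /* hypothetical */

int lib_wait_abs(void *event, const struct timespec *abs_timeout)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);

    int64_t delta_ns = (int64_t)(abs_timeout->tv_sec - now.tv_sec) * 1000000000LL
                     + (abs_timeout->tv_nsec - now.tv_nsec);
    if (delta_ns <= 0)
        return ETIMEDOUT;                           /* deadline already passed */

    /* Round up so the thread never wakes before the absolute deadline. */
    uint64_t ticks = ((uint64_t)delta_ns + NS_PER_TICK - 1) / NS_PER_TICK;
    return kernWaitForEventTicks(event, ticks);
}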

See work packets DDCI_PCR:3203 and DDCI_PCR:3204.

Message Queues

  • Need the ability for mq_send/mq_receive to block using an absolute timeout of the realtime clock
  • With priority scheduling, the highest priority, longest waiting thread is the one that unblocks
  • Need the ability to generate a sigevent signal when a message arrives in the queue (one that doesn't fulfill a blocked receive call)
  • Messages are inserted in the queue by priority, rather than just FIFO
  • NONBLOCK is an attribute set dynamically on the queue, but we could probably handle that at the POSIX lib API level

For safety base profile, message queues are intra-process only, so no issue of access control, shared namespace, or lifetime. Implementation guidance is that POSIX inter-process communication should use sockets rather than message queues.
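For reference, a small example of the standard POSIX usage behind the first bullet above: mq_timedreceive() blocks against an absolute CLOCK_REALTIME deadline (the 100 ms here is arbitrary).

#include <mqueue.h>
#include <stddef.h>
#include <sys/types.h>
#include <time.h>

ssize_t recv_with_deadline(mqd_t mq, char *buf, size_t len, unsigned *prio)
{
    struct timespec abs;
    clock_gettime(CLOCK_REALTIME, &abs);    /* mq_timedreceive is realtime-clock based */
    abs.tv_nsec += 100 * 1000000L;          /* arbitrary 100 ms deadline */
    if (abs.tv_nsec >= 1000000000L) {
        abs.tv_sec++;
        abs.tv_nsec -= 1000000000L;
    }
    return mq_timedreceive(mq, buf, len, prio, &abs);   /* absolute timeout */
}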

Might be able to use 653 buffers as a starting point. GK/RLF

  • Potential benefit to both POSIX and 653.
  • ISSUE: Is restriction of message queues to intra-process a problem for market acceptance?
    • Restriction is called out as only applying to safety base.

See work packet DDCI_PCR:3194.

PThreads

  • Attributes object handled within API Lib
  • However, pthread_attr_setstack needs to do some validation at the API level before the values end up being used in the createThreadEx call: stack readable/writable, aligned, and at least PTHREAD_STACK_MIN (is that compile time in a header, or queried from the kernel?); see the sketch after this list.
  • There is a POSIX option to set the user stack address. Initial Deos POSIX will behave as it does now, where thread creation can specify only a stack size, not a user stack address. createThread[Ex] will automatically allocate the stack from the 4MB user stack memory area.
    • pthread_attr_setstack() can specify stack addr & size
    • pthread_attr_setstacksize() only specifies size
    • pthread_attr_setstackaddr() only specifies addr; this is not part of safety base/extended.
  • Need createThreadEx to take an attributes object. createThread can use thread template to fill this out and call createThreadEx.
  • Need getThreadAttributes or getPriority API(s).
  • Need getSchedulerInfo to retrieve min/max priority. Maybe other attributes for testing.
    • Deos will support one set of priorities defined in the registry, and RMA schedulers will limit scheduler-based priority lists to #periods + #ISR priorities.
    • Since this is system based, it may be a getSystemInfoDEOS extension, a new API, or each scheduler could return the same values in getSchedulerInfo.
  • ISSUE: do we support SCHED_RR?
    • Who manages monitoring the interval? Easiest is the kernel, if it has timer support. SCHED_FIFO and SCHED_RR may be in the same scheduler, so the FIFO policy basically masks the time slice timer. A window change should save/restore the timer value.
    • Since sched_rr_get_interval is not available until safety extended, the initial position is SCHED_FIFO only for the initial safety base offering.
  • pthread_setconcurrency - not required to do anything but return the value that was set.
    • This is intended for process level scheduling multiplexed on system level scheduling, which Deos will not do.
    • Functions are marked obsolete
  • ISSUE: How do we support multi-core
    • FACE Safety Base does not have any APIs that would indicate which core a thread is to execute on.
    • The POSIX lib will need to create a scheduler name that matches the POSIX domain this process belongs to and some type of BMP core affinity.
    • Linux sched_setaffinity assigns a set of CPUs to a process and, I believe, does SMP when the set contains more than one.
    • Linux and some pthreads implementations have pthread_setaffinity_np to set a thread's affinity to a set of CPUs. This is done after the thread is created; the thread starts by inheriting the calling thread's affinity.
    • Linux and some pthreads implementations have pthread_attr_setaffinity_np to set a thread's affinity to a set of CPUs in the attribute passed to pthread_create. This seems like the best option; limit cpusetsize to 1 core.
    • Deos will not support SMP or an explicit runtime migration of a thread to a different scheduler (at least initially)
  • Thread Local Storage
    • Perhaps we should support GCC TLS to make porting threading libraries easier, and to help our users.
    • POSIX has thread specific data destructors. However, pthread_exit is not supported in safety base which is where the destructors are called.
    • ISSUE: If we support thread termination, should we support pthread_exit and who manages TLS destructors?
  • POSIX APIs are required to be thread safe. Other than the exceptions listed in http://pubs.opengroup.org/onlinepubs/009695399/toc.htm, we'll need to make sure all the other functions are thread safe.
  • Safety extended only: Cancellation and cancellation points (large number of functions).
    • Global assertion of non-cancelable functions. Would probably require some sort of checklist or policy.
  • Need a multi-thread-safe initialization mechanism (pthread_once).
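A minimal sketch of the pthread_attr_setstack() validation mentioned above, assuming a hypothetical 16-byte alignment requirement and deferring the readable/writable check to a kernel- or PAL-provided query.

#include <errno.h>
#include <limits.h>     /* PTHREAD_STACK_MIN (if compile time; may instead be a kernel query) */
#include <stddef.h>
#include <stdint.h>

#define STACK_ALIGN 16  /* assumed platform alignment requirement */

int attr_setstack_checks(void *addr, size_t size)
{
    if (size < PTHREAD_STACK_MIN)
        return EINVAL;
    if (((uintptr_t)addr % STACK_ALIGN) != 0)
        return EINVAL;
    /* The readable/writable check on [addr, addr+size) would go here,
       against a kernel- or PAL-provided memory map query (assumed to exist). */
    return 0;
}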

pthread_setschedprio()

  • Kernel will have to support setThreadPriority()
    • setThreadPriority() not supported for RMA schedulers.
    • setThreadPriority(): if the thread is waiting on a resource, the resource wait queue is not affected (ref POSIX sched policy SCHED_FIFO), but the scheduler priority is affected
      • The best solution we have at this time is that the kernel scheduler lists will have to be searched to see whether the thread is ready or waiting.

ISSUE: Does ARINC 653 have the same "no change to wait lists" behavior as POSIX?

  • 653 moves the thread to the end of the list; POSIX depends on the direction of the priority change.

How does the (multi-core) POSIX library implement atomic operations like pthread_setschedprio()?

  • The library has to determine whether the thread has a mutex locked to decide whether to call setThreadPriority(); in the meantime the mutex lock status could change.
    • Using a high priority for the POSIX library scheduler is insufficient in multi-core.
    • Using a "scheduler mutex" causes non-AIB-like effects.

Proposal: have the kernel support "base" and "current" thread priorities.

  • some API to "set to base priority"
  • API(s) for setting base and current priorities (see the hypothetical prototypes below)
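A hypothetical shape for that kernel interface; the names and types are assumptions for discussion, not existing Deos APIs.

typedef unsigned int threadHandle_t;   /* assumed handle type */
typedef unsigned int priority_t;       /* assumed priority type */

int setThreadBasePriority(threadHandle_t t, priority_t p);    /* base (and current, unless boosted) */
int setThreadCurrentPriority(threadHandle_t t, priority_t p); /* temporary boost, e.g. mutex ceiling */
int revertThreadToBasePriority(threadHandle_t t);             /* the "set to base priority" API */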

ISSUE: How do POSIX and RMA priorities relate?

Proposal:

  • Have registry specify max number of priorities for system.

See work packets DDCI_PCR:3199, DDCI_PCR:3200, DDCI_PCR:3201, DDCI_PCR:3202.

Condition Variables

  • Use Deos semaphore with count of 1 to control synchronization.
  • Need an absolute timeout; which clock to use is an attribute of the condition variable (see the example after this list), but CPU-time clocks will fail.
  • pthread_cond_signal does a signalSem to release 1 waiter. If there are no short duration waits or slack threads in this scheduler, is this guaranteed to be priority order and then FIFO?
  • pthread_cond_broadcast does a clear and reset of the semaphore to release all waiters
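For reference, standard POSIX already makes the wait clock an attribute of the condition variable; a minimal example:

#include <pthread.h>
#include <time.h>

int init_cond_monotonic(pthread_cond_t *cv)
{
    pthread_condattr_t ca;
    pthread_condattr_init(&ca);
    /* The clock used by pthread_cond_timedwait() is an attribute of the
       condition variable; CPU-time clocks are rejected by setclock(). */
    pthread_condattr_setclock(&ca, CLOCK_MONOTONIC);
    int rc = pthread_cond_init(cv, &ca);
    pthread_condattr_destroy(&ca);
    return rc;
}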

See work packet DDCI_PCR:3230

Mutex

  • Use setThreadPriority and semaphore with count of 1
  • Options: do nothing with priority; only change priority if a higher-priority thread is blocked; or always raise priority to the ceiling.
  • access control via writeability of the mutex object (cross process)

Issues (the same apply to mutexes and condition variables)

  • access control via writability of the shared memory containing the object.
    • pshared is not needed for safety base or extended.

Proposal for user space mutexes and condition variables:

  • Add kernel API to change thread priority.
  • May need to have kernel semaphore or mutex queuing model (FIFO vs priority).
  • Library must handle deleting threads.
  • Have a good chance to use SCORE or HeartOS for source.
  • For timeouts can wait on a resource that will never become available and use raiseExceptionToThread() to ready the next thread.

See work packet DDCI_PCR:3231

Semaphores

  • Map to Deos semaphore
  • Need absolute timeout
  • Need way to get current count value of semaphore
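For reference, the standard POSIX calls behind the last two bullets: sem_timedwait() takes an absolute CLOCK_REALTIME deadline, and sem_getvalue() reads the current count (the 2 s deadline is arbitrary).

#include <semaphore.h>
#include <time.h>

int wait_with_deadline(sem_t *sem, int *count_out)
{
    struct timespec abs;
    clock_gettime(CLOCK_REALTIME, &abs);  /* sem_timedwait uses an absolute realtime deadline */
    abs.tv_sec += 2;                      /* arbitrary 2 s deadline */
    if (sem_timedwait(sem, &abs) != 0)
        return -1;
    sem_getvalue(sem, count_out);         /* current count value */
    return 0;
}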

DECISION: the kernel will add UGO to semaphores, dependent on a global "use UGO" flag. Weasel DECISION: the same applies to all inter-process visible KIOs.

Issues semaphores

  • access control
    • grant vs UGO
  • namespace
  • lifetime
  • queuing model? FIFO within priority.

Permit semaphores to be "system level".

  • Global namespace
    • permitted binder
  • Auto created
    • various initial values and UGO settings
  • Named semaphore lifetime is as with SMO.
  • ???? Unnamed semaphore deletion is not understood.
    • The behavior of unnamed + pshared semaphores, and how the system determines when the semaphore can be deleted, is unknown.
      • Since pshared is not supported for mutexes in safety base/extended, we'll postulate that pshared is not supported for semaphores either.
      • Deos POSIX will not support pshared semaphores.

TODO: create work packet.

AL: Add statement to UG about globally unique process handles.

High level options

  1. Implement in kernel
  2. Add POSIX process that would own all the semaphores
    • perhaps using a "sudo" like API call... RLR
  3. {need, require, utilize} POSIX file system
  4. Invent some new primitive, usable by user space, that could be used to implement existing thread coordination mechanisms.
  • Do we support named semaphores?
    • How are semaphores named? e.g., /sem/process_name/semaphore_name
      • Linux requires the name to start with "/" and contain no further "/" characters.
      • The POSIX spec says the effect of additional "/" characters is implementation defined (see the example after this list).
    • What happens when the process that created a named semaphore is deleted?
      • POSIX says the semaphore outlives the creating process.
      • However, sem_unlink() is only in safety extended.
    • How does POSIX restrict creating specific named semaphores?
    • How does UGO access fit into semaphores?
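For reference, a standard POSIX named-semaphore creation call touching the naming and UGO questions above; the name and mode bits are only illustrative.

#include <fcntl.h>
#include <semaphore.h>
#include <sys/stat.h>

sem_t *create_named(void)
{
    /* Flat name: portable. A hierarchical name like "/sem/proc/name" is
       implementation defined per POSIX and rejected by Linux. */
    return sem_open("/proc1_sem1",
                    O_CREAT | O_EXCL,
                    S_IRUSR | S_IWUSR,    /* "UGO" bits: owner read/write only */
                    1);                   /* initial count */
}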

TODO:[JN] create work packet.

Memory Objects

  • No issues, but once a file system is also supported, mmap can take a file descriptor for a regular file, a shared memory object, or a typed memory object (not sure about this last one)

Proposal:

  • PCR:10175 Add UGO access control to kernel
    • enabled via switch per system, perhaps per object.
    • grant continues to be supported.
    • will need APIs to populate the stat() structure.
  • PCR:10173 Add attachMOAtAddr()
    • If re-attaching at the same addr, can mapped pages be temporarily "not present"?
      • Hopefully not.
    • Can use attachMO to increase SMO size.
    • Currently no way to decrease size of a SMO.
      • Just fine for initial release.
  • Namespace: POSIX semantics match Deos.
  • Lifetime of shmem objects: POSIX semantics match Deos.
  • DDCI_PCR:3198 User API library
    • shm_open() returns a file handle.
      • Other than close(), ftruncate(), and mmap(), what else can be done with the file handle?
      • mmap() is only applicable to shared memory.
      • stat() must get various UGO related information.
      • ISSUE: Should file operations be integrated/harmonized with IOI?
      • read() and write() to shmem not defined.
        • ISSUE: Is restriction a problem for market acceptance?
    • Development of user library has minimal dependence on kernel support for UGO.
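For reference, the standard shm_open()/ftruncate()/mmap() sequence discussed above, which the user library must support; the object name is hypothetical.

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

void *map_shared(size_t len)
{
    int fd = shm_open("/example_smo", O_CREAT | O_RDWR, S_IRUSR | S_IWUSR);
    if (fd < 0)
        return NULL;
    ftruncate(fd, (off_t)len);            /* size the object */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                            /* the mapping remains valid after close */
    return (p == MAP_FAILED) ? NULL : p;
}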

Signals

  • Threads need a signal mask
  • A signal is generated to either a thread or the process
    • Which thread to raise the exception (signal) to?
    • Permissions
      • Will use process UGO
    • Signals are queued. Queue depth appears to be 1 for all but realtime signals.
  • Don't see anything about mapping interrupts (other than timer interrupts) to signals.
  • Hardware faults (Deos exceptions) and maybe some POSIX-interpreted user defined exceptions will be translated to signals.
  • Use the thread/process exception handler with a library defined handler that can then interpret the stored signal information.
  • volatile sig_atomic_t (see the sketch below)
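A minimal example of the standard pattern behind that last bullet: the handler writes only a volatile sig_atomic_t flag, the one object type POSIX guarantees is safe to write from an async signal handler (SIGUSR1 is an arbitrary choice).

#include <signal.h>

static volatile sig_atomic_t got_signal = 0;

static void handler(int sig)
{
    (void)sig;
    got_signal = 1;     /* the only handler-safe write */
}

void install_handler(void)
{
    struct sigaction sa;
    sa.sa_handler = handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGUSR1, &sa, (struct sigaction *)0);
}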

limits.h in the POSIX spec requires:

  • RTSIG_MAX must be at least _POSIX_RTSIG_MAX (8).
  • SIGQUEUE_MAX must be at least _POSIX_SIGQUEUE_MAX (32).

At this point in time, we think most, if not all, of this goes into kernel. Work packet: PCR:10180

File System

  • Did not look at putting the POSIX / ARINC 653 Part 2 API on top of CFFS. Assume that has no impact on the kernel/POSIX lib, with the possible exception of mmap needing to know about both types of descriptors.

TODO:[BC]create work packet.

See also: File_System_Project.

Must support

  1. Volumes, directories, removable media
    • existing DDC-I ARINC 653 Part 2 file system:
      • Simulates directories on top of CFFS
      • Doesn't support atomic directory renames
  2. Atomically updated data blocks
  3. File operations must be applicable to shmem and "regular files".
  4. umask and access control

work packet: the above "must support" list must be confirmed and augmented.

work packet: investigate existing file systems

  • GPL/BSD
  • Commercial?
  • DDCI ARINC 653 part 2 on top of CFFS

Can ignore:

 - Putting executable files in the POSIX file system.
 - Putting kernel, PAL, or registry in the POSIX file system.

Proposal: a simple vfile layer supporting a minimal set of file systems:

  • APIs must support: shmem, sockets, "our writable file system", kernel file system
  • Since we're staying "level E", we can use a GPL or other file system.

Existing CFFS doesn't support directory renames.

653 Pseudo Code Analysis

Partition API

GET_PARTITION_STATUS
// values from 653 config file, plus lock level, operating mode, start condition, all managed by 653 lib

SET_PARTITION_MODE
// Idle
   // deleteProcess

// ColdStart
   // restartProcess
   
// WarmStart
   // wake up initial thread
   // delete all 653 objects, Deos threads, Deos events
   // do restartentry stuff to run main
   // set flags to know start condition, operating mode
   // restart main thread

// Normal
   // start all 653 processes
   // aperiodic - make them ready now
   // periodic - calculate the next release point (window with periodic processing start)
   // calculate deadlines and if out of range raise exception for HM
   // wait on an event until warm start needs to wake us up.
   // Note this may mean there are no threads in the scheduler ready until periodic window starts
   

Process API

CREATE_PROCESS
  // error checks
  // createEvent to use for process synchronization and suspension
  // createThreadEx
     // process is periodic if it has a period
     // process has deadline if it has a time_capacity (Wall time, can be replenished with API)
     
  // create 653 object and return
  // set entry point to a library routine preamble that can run to the suspension event
  
SET_PRIORITY
  // find thread
  // error checks
  // waiting resource queues do not have to be reordered
  // setThreadPriority (must move to the end of the list for the requested priority, so this can be a yield)
  
SUSPEND_SELF
// error checks
// if no timeout, return done
// mark us as suspended in 653 lib
// waitForEvent (suspensionEvent, timeout)  (Validate the timeout; also, how do we pass a 64-bit relative (or converted-to-absolute) timeout?)
// when return timed out or someone resumed us
// clear suspended flag

SUSPEND
// error checks
// mark thread as suspended in 653 lib
// raiseException to the thread and have it waitForEvent (suspensionEvent, waitIndefinite)
// when return someone resumed us, clear suspended flag

Another option would be to set the priority to idle-1.

RESUME
// error checks
// clear suspended flag for thread
// if process was suspended while waiting on a resource (blackboard, event, buffer, semaphore, etc), then return. Leave thread waiting on suspensionEvent as part of resource wait
// else pulseEvent (suspensionEvent)

STOP_SELF
// error checks
// if error process, reset HM error queue
// restartThread - run 653 preamble so it is at a suspension event waiting for start

STOP
// error checks
// cancel any pending resource waits
// if error process current and stopping the preempted process, ensure user level preemption lock is reset to 0
// restart thread so it gets back to waiting at suspension event for start

START
// error checks
// if aperiodic
   // if in NORMAL mode
      // setup deadline based on current time + time capacity    (Is this something priority scheduler will track?)
      // pulse suspension event so it starts
   // if in startup
      // stays waiting on event until Normal mode
// else if periodic
   // if in NORMAL mode
      // Need to calculate next release point. Assume this is something 653 lib needs to do and is not part of priority scheduler. Also deadline can be calculated here
      // Some future window activation needs to check if release point and pulse suspension event

DELAYED_START
// NOT IMPLEMENTED. Not part of Honeywell SOW. May be able to calculate which future window would be release point.

LOCK_PREEMPTION
// error checks
// setThreadPriority(+1 of user range)
  // Note this is inadequate for multi-core.

UNLOCK_PREEMPTION
// error checks
// nesting logic
// setThreadPriority (restored), may schedule higher priority pending during that time. 
// Spec does not indicate that this has to be a yield, but it does not preclude that.

GET_MY_ID
// some error checks for initial/error process
// return 653 handle

GET_PROCESS_ID
// error checks
// return 653 handle for named process

GET_PROCESS_STATUS
// error checks
// copy static info
// If current thread, state = RUNNING. Else the runtime should know if WAITING, READY, or DORMANT. Should we somehow query whether a thread is WAITING vs READY when it blocked outside the 653 API (i.e., the user called a Deos API directly for some resource)?

Time API

TIMED_WAIT
// if DELAY = 0, setThreadPriority(same) causing yield
// waitForEvent (threadSuspensionEvent, timeout) (How is timeout validated for range and passed through?)

PERIODIC_WAIT
// error checks
// calculate next release point as current release point + period
// calculate new deadline time as next release point + time capacity (who is responsible for monitoring and reporting this?)
// waitForEvent (threadSuspensionEvent) timeout used for release point, or preferably waitIndefinite there but windowActivation pulses. (This may be able to use startOfPeriod events and Deos long duration syntax)
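A sketch of the release point and deadline arithmetic described above; the names are illustrative, not Deos APIs, and all times are in nanoseconds.

#include <stdint.h>

static void next_release_and_deadline(uint64_t cur_release_ns, uint64_t period_ns,
                                      uint64_t time_capacity_ns,
                                      uint64_t *release_ns, uint64_t *deadline_ns)
{
    *release_ns  = cur_release_ns + period_ns;       /* next release point */
    *deadline_ns = *release_ns + time_capacity_ns;   /* missing it raises an HM event */
}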

GET_TIME
// calls PAL getElapsedTimeInNs

REPLENISH
// error checks - periodic process cannot exceed next release point
// update the deadline time currently in effect  (how does 653 request this of the scheduler, if the scheduler uses the thread timer as the current 653 process deadline monitor?)

WINDOW_WAIT - DDC-I extension
// allows better continuation slice usage, and aperiodic process to wait for known event rather than timed wait
// waitForEvent (threadSuspensionEvent)   - window activation must release all of these every activation, but with this approach the 653 lib must pulse multiple events rather than use an O(1) mergeList.

Blackboard API

CREATE_BLACKBOARD
{
   // Error checks
   // createEvent to use for synchronizing
   // create blackboard 653 data structure
}

DISPLAY_BLACKBOARD
{
   // find blackboard
   // error checks
   // enter scheduler critical
   // copy data in and set occupied flag (should copy be in critical, or do we need another sync mechanism)
   // pulseEvent (Problem here is that if another thread suspended one of the waiters, they should also be waiting on their thread suspension event and should not wake up. Can we have a thread waiting on multiple resources? If not, then we would need to use the suspension event for this and loop pulsing them all.)
   // leave scheduler critical
}

READ_BLACKBOARD
{
   // find blackboard
   // error checks
   // If blackboard has data, copy it.  Do we need to protect the data from changing during the copy?
   // Else if timeout = 0, return not available status
   // Else if preemption disabled by user or error process, return invalid mode
   // else if timeout would overflow, return error   (How do we know this now? Currently we convert to absolute ns time and compare with the PAL max supported time.)
   // else (wait indefinite or valid timeout) waitForEvent   (How do we pass the timeout, and can that wait know about the overflow and return status?)
   // If it times out, then return timeout status
   // Else copy the buffer. Again, do we need to protect the data from changing during the copy?
}

CLEAR_BLACKBOARD
{
   // find blackboard
   // error checks
   // set flag blackboard is empty
}

GET_BLACKBOARD_ID
{
   // find blackboard, return handle
}

GET_BLACKBOARD_STATUS
{
   // find blackboard
   // error checks
   // get static max msg size, and dynamic empty indicator
   // TBD: Need to know how many threads are waiting on the event. Does the API library need to keep track of that? In 653 no processes are deleted, except on warm start, so we could keep a parallel consistent list with all calls to waitForEvent, pulseEvent.
}

Buffer API

CREATE_BUFFER
{
   // Error checks
   // create buffer 653 data structure, store queue discipline
   // No separate sync event. Uses a suspension event created as part of CREATE_PROCESS
}

SEND_BUFFER
{
   // find buffer
   // error checks
   // enter scheduler critical
   // If not full
      // If there is a waiting process, copy data to the waiting thread's stored address, pulseEvent (653 process suspension event, if the 653 process is not suspended)
      // Else, copy data to buffer.
   // Else if full
      // If no wait, return not available
      // if preemption user locked or error process, return error
      // Insert in 653 managed queue by discipline
      // store our address, allowing the reader to copy our data without a context switch to us.
      // waitForEvent (653 process suspension event) with timeout (who validates the timeout?)
      // when we return, either we timed out or the data was copied
}

RECEIVE_BUFFER
{
   // find buffer
   // error checks
   // If buffer has data
      // read first message.
      // If a sender is waiting, copy its data into the freed slot; pulseEvent if the thread is not 653 suspended
   // Else if timeout = 0, return not available status
   // Else if preemption disabled by user or error process, return invalid mode
   // else (wait indefinite or valid timeout) waitForEvent (thread suspension event)
   // If it times out, then return timeout status
   // Else the data was copied into our pointer
}

GET_BUFFER_ID
{
   // find buffer, return handle
}

GET_BUFFER_STATUS
{
   // find buffer
   // error checks
   // get static and dynamic properties
   // Get num waiting (either empty or full)
}

Event API

Similar to the Blackboard API's use of Deos Events for thread blocking.

Might be able to map directly to Deos APIs.

Semaphore API

Similar to the Buffer API's use of Deos Events for thread blocking.

Might be able to map directly to Deos APIs.

Sampling Port API

Library Initialization
// ioi_init

Formatting functions
// knows data layout and does timestamp marking/freshness checking

CREATE_SAMPLING_PORT
// error checks
// ioi_open

WRITE_SAMPLING_MESSAGE
// error checks
// ioi_write

READ_SAMPLING_MESSAGE
// error checks
// ioi_read

GET_SAMPLING_PORT_ID
// error checks and find sampling port

GET_SAMPLING_PORT_STATUS
// find port
// get static data and last msg valid

Queuing Port API

Notes:
Queuing Ports
- Because POSIX apps should be able to use the 653 API for sampling/queuing ports, we want that to be in a separate lib, or allow POSIX to bring in the full 653 lib
- Sampling ports are no problem
- Queuing ports have an issue with blocking
- If a call needs to block, there is a queuing discipline which IOI does not support, so IOI or the lib must deal with it
- The problem is, once the port is available (not empty or full, based on direction), notifying the next thread in the queue to read/write the next message
- If there are multiple writers from different processes/windows, we cannot guarantee that a given thread will be next to run, if we wanted to support freeing multiple spots in the port


CREATE_QUEUING_PORT
// error checks
// save queue discipline
// ioi_open

SEND_QUEUING_MESSAGE
// error checks
// if no waiting processes
//   attempt ioi_write
//   if successful return status, else fall through
// if no timeout, return not available
// if user preemption or error process return invalid mode

(How does this work in FACE POSIX?
   1) We don't want window activation to be this complex.
   2) The 653 lib cannot assume the current thread is a 653 process with a stored pointer to write from.
   3) If not a 653 process, do we force POSIX pthread_create to create this object? Once this is a separate library, should RMA be allowed to call it, or does RMA still do the IOI manually and not play in the queue discipline?)
   
// Current implementation
   // waitForEvent (threadSuspensionEvent, timeout)
   // At window activation, the scheduler sees if there are any pending IOI writes and attempts them itself, since it knows where the 653 calling process has the data; if successful, it releases the 653 process
   
// New implementation
   // If FIFO queue discipline, setThreadPriority (user max + 1) so it will run FIFO order to read the port
   // waitForEvent (threadSuspensionEvent, timeout)
   // At process switch (for POSIX this may be within the same window), if a thread is waiting for IOI, pulse threadSuspensionEvent. (We currently don't have process switch notification, only window activation.)
   // On wake up, the thread checks whether the timeout expired and, if not, does the ioi_write. If not successful, it loops waiting for the event with a reduced timeout.
   // Restore priority if necessary, which may schedule away


// Preferred new implementation
   // If FIFO queue discipline, setThreadPriority (user max + 1) so it will run FIFO order to read the port
   // ioi_timedwrite - ioi deals with keeping list of threads to block waitForEvent with timeout, and ioi_read will pulseEvent for the next in queue
   // restore priority if we switched it to use FIFO rather than priority
   
   // The problem with this approach is that if we have multiple writers from different windows, ioi would have to wait for the ioi_write to come from the pulsed thread before it could have ioi_read signal any others.
   // Therefore, it would need to know whether it released any writes which have not yet happened; then the ioi_write would trigger the next release for an ioi_write.
   // That could cause synchronization issues if a thread is pulsed but then never makes its write.
   


Face to face meetings

August 2013, Alexandria, VA

Richard attended the RIG/General Enhancements breakouts and presented DDC-I multi-core to the joint TWG breakout. The conformance test suite for FACE 1.0 was approved. FACE 2.1 is in steering committee review, so it is harder to make changes. FACE 3.0 is starting the draft stage.

Multi-core

The committee wants to have OS vendors agree on a set of POSIX APIs to propose to the Open Group RTES forum. (GHS has already proposed. Windriver was present. LynuxWorks was represented via a consultant who works for RTI.) If believed to be a solid draft, it will then be included in FACE 3.0.

Proposed APIs are:

  • rewind_scheduling_allocation_domain - reset the process specific variable for determining cores available to the process
  • get_scheduling_allocation_domain - get the next core available to this process, or error at end of list
  • pthread_attr_get_core_affinity - reads the core affinity from the pthread_attr object; default is SMP (no specific affinity)
  • pthread_attr_set_core_affinity - sets the core affinity in the pthread_attr object
  • pthread_get_core_affinity - reads the current core affinity for the thread
  • pthread_set_core_affinity - sets the core affinity for the thread (after creation)

FACE will also adopt what 653 committee does for multicore.

September 2013, Tucson, AZ

FACE 2.1 will add pthread_rwlock and pthread_barrier services to the General Purpose profile. FACE 2.1 will recommend avoiding APIs such as strcat in favor of safer versions such as strncat, to align with OMS. These are mostly the "n" versions of the string APIs and the _r versions of other ANSI APIs.