Kernel Project

The non-distribution specific Deos Kernel project.

== Description ==

This is a placeholder for the work performed across multiple kernel
experimental and verification baselines.  I.e., roadmap sorts of
topics.
Kernel releases prior to 7.6.x were made by Honeywell.  For a list see {{SERVER}}/scm/cert/deos-products/kernel/kernel/
{| border="1"
|- style="background: silver"
! Distribution || Kernel Version ||  features  || Branch
|-
| [[Kernel_762_cert_Project|Denali]] ||  7.6.2
|
| [{{SERVER}}/scm/Deos/products/kernel/kernel/branches/7.6.2 7.6.2]
|-
| [[Kernel 7.10 Project| Elbert]]      ||  7.10.6
| [[Deos_32%2Bbit_Physical_Address_Project|36-bit Support]]
| [{{SERVER}}/scm/Deos/products/kernel/kernel/branches/7.10.6 7.10.6]
|-
| [[Kernel_Fourpeaks_Project|Fourpeaks]]
|  8.3.1
| [{{SERVER}}/Wiki/Kernel_Project/Memory_Pools Memory Pools], [[Deos Cache Partitioning]], 653, [[Deos_MIPS_Port|MIPS]], [[POSIX_Access_Controls]]
| [{{SERVER}}/scm/Deos/products/kernel/kernel/branches/8.3.1 8.3.1]
|-
| [[Kernel_Greys_Project|Greys]]
| 8.4.2
| ARM
| [{{SERVER}}/scm/Deos/products/kernel/kernel/branches/8.4.2 8.4.2]
|-
| Handies      ||  8.5.0        || ||
|-
| [[Kernel_multicore|Indie]]
| 9.2.3
| Multicore for one core
|
|-
| [[Jupiter_kernel|Jupiter]]
|  10.8.4
| Multicore
|
|-
| [[Kismet_Kernel|Kismet]]
| 11.4.x
| [[Kernel_64|64-bit]]
|[{{SERVER}}/scm/Deos/products/kernel/kernel/branches/mainline mainline]
|}


Perhaps this would be better handled as a "product" page like we did on the legacy wiki.

These are the current Kernel projects in work:


== Laplata ==

== Ideas ==

=== Driver Space ===

This is more of an idea (thanks Ryan) than a design at this point.
The name of this feature is still TBD; for now we are using Driver Space (DS)
and Driver Space Library (DSL).

The main idea is to construct a space-partitioned area that executes in user
mode and permits drivers to operate without the possibility of corruption from
"normal" user mode, while still operating in user mode so drivers can utilize
kernel services, e.g., scheduling, resources, etc.  Drivers are thus partitioned
from "normal" user mode code, and the kernel remains partitioned from drivers
(which permits the kernel to retain some semblance of a microkernel).

==== The core ideas ====
# A range of addresses where DS libraries are loaded, at system startup, process creation, or both.
## The key is that DSL loading would finish and be "sealed" (the kernel prevents modifications to the VAS mapping) before user mode libraries start to load.
# A pair of VASes, user-mode and DS, that are isomorphic except that the DS address range is user-mode read-only or inaccessible in the user-mode VAS.
#* Changes to the user mode VAS would be made consistently in the DS VAS.  This enables DSL to get direct r/w access to user mode via pointers.
#** DSL code would need to verify pointers were within user mode address range.
#** DSL code would need to catch access violations, e.g., not present pages, etc.
#* It is likely that DSL code will need some control over which DS address ranges user mode is permitted to read or write:
#** Some ARM devices can raise an SError if they are accessed via non-word-sized reads/writes, i.e., user space access would be NONE rather than read.
#** Some designs would benefit from having DS platform resources (e.g., DMA buffers) be directly readable/writable by user mode.
# A call/return between user mode and DS would be made via a trap mechanism, managed by the kernel, that swaps the VASes and stacks.
#* Any kernel code that needed to distinguish between user space and DS could use the VAS as the mode indicator.
#* The ABI and how to register DSL entry points is TBD, but presumably similar to kernel system service traps.
# At present, the state that is distinct between user space and DS is:
## stack
### The thought is the DS stack would be empty on return to user mode.
### The alternative would be some sort of co-routining, which would be helpful if DSL code needed to "call back" to user mode.
### The size of the DS stacks, and when they are created, are TBD.
### The kernel would, obviously, need to know about both stacks, notably for overflow detection, statistics, etc.
## exception vectors.
### E.g., to handle access violations.
#### Should we enable linguistic exception handling?
### There might need to be some way to propagate exceptions from DS to user space.
## The object file record lists
## This state list is likely incomplete, but hopefully there are not a large number of differences.
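The pointer handling in idea 2 can be sketched in C. This is only an illustration of the range check DSL code would need before dereferencing a user-mode pointer; the bounds `USER_VAS_BASE`/`USER_VAS_LIMIT` and the `ds_copy_in()` helper are hypothetical names for illustration, not an existing Deos API.

```c
/*
 * Sketch only: the kind of pointer validation DSL code would need
 * before touching user-mode memory through a raw pointer.  The range
 * bounds and the ds_copy_in() helper are hypothetical names, not an
 * existing Deos API.
 */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define USER_VAS_BASE  0x00400000UL   /* hypothetical start of user range */
#define USER_VAS_LIMIT 0x80000000UL   /* hypothetical end of user range   */

/* Return nonzero if [ptr, ptr + len) lies entirely in the user range. */
static int ds_user_range_ok(const void *ptr, size_t len)
{
    uintptr_t p = (uintptr_t)ptr;
    return len <= USER_VAS_LIMIT - USER_VAS_BASE
        && p >= USER_VAS_BASE
        && p <= USER_VAS_LIMIT - len;
}

/* Copy from a user-mode buffer into a DS buffer, rejecting pointers
 * outside the user range.  A real implementation would also need to
 * catch access faults (e.g., not-present pages) during the copy. */
static int ds_copy_in(void *dst, const void *user_src, size_t len)
{
    if (!ds_user_range_ok(user_src, len))
        return -1;
    memcpy(dst, user_src, len);
    return 0;
}
```

Note that the range check alone is not sufficient: per the notes above, DSL code must also survive access violations (e.g., a not-present page inside an otherwise valid range), which requires the exception-vector support listed under item 4.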
 
==== Issues ====
# The terminology "user mode" vs. "driver mode" is poor.  The kernel still wants to treat both driver and user space as "user mode", so perhaps some term other than "driver mode" is needed to indicate when the active VAS is the DS VAS.
# It would be nice to prevent user space code from executing in driver mode.  E.g., ensure that DSL code doesn't accidentally call a user mode function.
## Perhaps mark all user space VAS addresses as no execute in the DS VAS?
# PRLs provide a way to do hardware initialization at system startup so there are no race conditions.  If driver space doesn't have a similar capability, that will re-open the race condition.
# PRLs require a stack analysis to ensure they are within the kernel allowed stack size.  It would be nice if driver space could have a variable stack size so that normal watermarking would work.
# PRLs can respond to mode changes; would there be a way to enable driver space drivers to do the same?
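Issue 2 above (preventing DS from executing user-space code) could be addressed at VAS-sealing time. The following is a toy sketch under stated assumptions: the flat page table, the `PTE_NX` bit, and `vas_set_nx_range()` are hypothetical illustrations, since real page-table formats are architecture specific.

```c
/*
 * Sketch only: one way the kernel could mark every user-space page
 * non-executable in the DS VAS when that VAS is sealed.  The flat toy
 * page table, PTE_NX bit, and vas_set_nx_range() are hypothetical,
 * not real Deos kernel structures.
 */
#include <stddef.h>
#include <stdint.h>

#define PAGE_SHIFT 12u            /* 4 KiB pages */
#define NPAGES     16u            /* toy page-table size */
#define PTE_NX     (1u << 0)      /* hypothetical no-execute bit */

typedef struct {
    uint32_t pte[NPAGES];         /* toy flat page table for one VAS */
} vas_t;

/* Mark pages covering [start, end) non-executable in the given (DS)
 * VAS, so an accidental call into user-space code faults rather than
 * executing in driver mode. */
static void vas_set_nx_range(vas_t *vas, uintptr_t start, uintptr_t end)
{
    for (uintptr_t a = start; a < end; a += (uintptr_t)1 << PAGE_SHIFT) {
        size_t idx = (size_t)(a >> PAGE_SHIFT);
        if (idx < NPAGES)
            vas->pte[idx] |= PTE_NX;
    }
}
```

This is only viable on targets whose MMU supports a per-page execute permission; where it does, the kernel could apply it over the entire user address range as part of the sealing step described under the core ideas.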


== Roadmap ==

These are the substantive changes that have been proposed for the
kernel that are not yet in a project.

* Are there changes made by Honeywell that need to be brought forward?
* POSIX
* [[FACE_Project]]
* Multi-core ARINC 653
* Support for Ada (unsure if there are any kernel changes required).
* Make HAL a separately loaded .so (to speed processor porting).
 




[[Category:Projects]]

Latest revision as of 17:46, 12 December 2024
