Verification Ideas


Charge code:
Project: 2150-000-600 Deos Maintenance & Support
Task: 22 Process Improvement

Overview

The purpose of this page is to capture our frustrations with processes/procedures, along with ideas for improving our verification plans and procedures. At present the intent is to keep this free-form and stream-of-consciousness, organized by component (i.e., a section per component that was being worked when the frustration or process-improvement idea occurred).

Priority Items

SCMP Processes & Procedures

CCB and PCRs

Objectives:

  • Leverage the time spent on CCB activities, and make that effort useful to development teams and management.
  • Reduce or eliminate repetitive (non-value) activities (e.g., the repeated review of hundreds of on-hold PCRs during a kernel release CCB).
  • Ensure all DO-178C Objectives are met.

Requirements:

  • Development teams and management must have timely insight into PCRs being worked for a specific release.
  • PCRs cannot be worked unless approved by management.
  • CCB meetings must be attended by appropriate people.

Current CCB process

Activity | Description                               | DO-178C Goals | DO-178C Objectives | DO-178C Activities | Current Process Frequency | Current Process Notes
CCB1     | Confirm, Close, Reopen, Reassign          | 7.0[d]        | 7.1[d]             | 7.2.4[c]           | Weekly                    | Confirm, Close
         |                                           | 7.0[e]        | 7.1[e]             | 7.2.4[a], 7.2.5[b] | Weekly                    | Confirm, Close
CCB2     | Determine Applicable Version and Priority | 7.0[d]        | 7.1[d]             |                    | Weekly                    | Limited
CCB3     | Assist with Assignee and Requestee(s)     | 7.0[e]        | 7.1[e]             |                    | Weekly                    | Limited
CCB4     | Assess Required Level of Completeness     | 7.0[d]        | 7.1[d]             |                    | Weekly                    | Limited
         |                                           | 7.0[e]        | 7.1[e]             |                    | Weekly                    | Limited
CCB5     | Status for a Given Product Version        | 7.0[d]        | 7.1[d]             | 7.2.4[c]           | Release                   | Yes
CCB6     | Assess Need for Branch or Merge           | 7.0[d]        | 7.1[d]             |                    | Release                   | Yes
CCB7     | Review PCRs on Hold                       | 7.0[e]        | 7.1[e]             | 7.2.4[c]           | Release                   | Yes
CCB8     | Review Process Noncompliance PCRs         | 7.0[e]        | 7.1[e]             | 7.2.4[c]           | Quarterly                 | Yes
CCB9     | Follow Up                                 | 7.0[e]        | 7.1[e]             |                    | Any                       | As Needed
Evidence |                                           |               |                    |                    |                           | Yes
Summary of shortcomings with the current process:
  • In general, the weekly CCB has limited effectiveness/usefulness for development teams due to lack of participants (key developers are not present).
  • It also has limited effectiveness for management, since approval of PCR work is not being enforced.
  • While evidence is generated that the CCB was completed, CCB tasks CCB1-CCB4 are not fully performed. In particular, the current process does not provide oversight for how a PCR impacts other components.
  • Additionally, we are unable to leverage any work done in the weekly CCB to streamline our component release activities.

New CCB and PCR Process Proposal: (PCR:10343)

Break the single weekly CCB meeting into 4 "component-centered" and "event-driven" CCB meetings. Note: Level D/E components, along with OpenArbor, TRAC and SCORE, are not required to comply with DO-178C requirements; however, a consistent development process is applied across all components. To identify CCB meeting minutes generated for non-certified components, include an identifier in the title, e.g., "DAL-E-CCB...", "OA-CCB...", as this distinction will be useful for auditors.

Define 3 Categories of CCB Activities (down from 9)

CCB1: Assess Impact - create PCRs, define Summary, Description and Severity

  • Identify impact to SW Lifecycle data w/i component: Reqs, Code and Test (use Flags)
  • Identify impact to other component(s), and create additional PCRs
  • DO-178C Objectives: 7.2.4a, 7.2.5a, 7.2.5b
  • Requires management and marketing attendance.
  • "Deos Product Availability Guide" meeting - currently occurs monthy; already performing CCB1 and CCB2 activities

CCB2: Approve and Assign - set Status (Assigned), Assignee, Version, and Priority

  • DO-178C Objectives: 7.0e, 7.1d, 7.1e, 7.2.4b, 7.2.4c
  • Requires management input. Work is only allowed on PCRs that have been assigned.
  • Include charge code in PCR?

CCB3: Assess Status - set Status (Close or Reopen)

  • This is where compliance with the SDVP is verified: completeness of changes; monitoring Flags for status of impacted lifecycle data (Reqs, Code, Test); closing process non-compliance PCRs; assessing PCRs on Hold; etc.
  • DO-178C Objectives: 7.0d, 7.0e, 7.1d, 7.1e, 7.2.4c

Define 4 Types of CCB Meetings to Perform CCB Activities

New CCB Process: New Definitions of CCB Meetings and Activities
CCB Meeting        | Frequency                                                                          | Attendees                                                           | CCB Activities       | Notes
Product Planning   | Prior to work starting on the next release; agenda item on weekly team meeting (as needed) | Lead developers from all components involved, management and marketing | CCB1 and CCB2        | Establish a baseline and generate CCB evidence (including meaningful minutes) on the set of PCRs to work for the next release. Also applies to branching or merging a component.
Active Development | As needed (typically weekly)                                                       | Component team lead and member(s), and management                   | CCB1, CCB2 and CCB3  | Generate CCB evidence and report to management (see Note below for details on the report).
Product Release    | Prior to release                                                                   | Component team lead and member(s)                                   | CCB3                 | Generate CCB evidence.
General            | Quarterly and event-driven (e.g., transition to DO-178C)                           | SQA, pertinent team member(s) and management                        | CCB1, CCB2 and CCB3  | Address documentation PCRs (non-component related). Generate CCB evidence.

CCB Evidence and Management Reports
1. All CCB meetings will generate CCB evidence using the ccbm.py script. The script should be included in the cygwin installation (deos-maintainer-tools) so that it is easily accessible.
2. Management Reports - produced on-demand (not auto-generated)
a.) Identify PCRs that have transitioned to Assigned during a specified date range. Search/sort on component and Priority.
b.) Identify work accomplished on a component during a specified date range. Triggers include: commits, and Status transitioning to Resolved.
c.) Identify changes that may affect work scope. Triggers include any changes to the following fields: Priority, Severity, Status (other than Resolved or Closed), Hardware, Target Milestone or Flags (transition to "?").
d.) Identify impacted lifecycle items using the cmpStatus2src.cgi script. Add a switch to the script to "fix problems", which will update the status file; additions and deletions must be fixed by a human, but the script will identify these actions.
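
As a rough illustration of report (a), the sketch below pulls PCR history from Bugzilla and flags PCRs whose Status transitioned to ASSIGNED during a date range. It is an assumption-laden sketch, not the report implementation: the server URL, candidate PCR list, and date range are placeholders, and it presumes the upgraded Bugzilla exposes the standard REST /rest/bug/<id>/history endpoint.

    # Hypothetical sketch of Management Report (a): PCRs that transitioned to
    # Assigned during a specified date range (URL and PCR list are placeholders).
    import requests
    from datetime import datetime, timezone

    BUGZILLA = "https://bugzilla.example.com"          # placeholder URL
    CANDIDATES = [10343, 9742, 9905]                   # PCRs to inspect (placeholder)
    START = datetime(2016, 8, 1, tzinfo=timezone.utc)
    END = datetime(2016, 9, 1, tzinfo=timezone.utc)

    def assigned_in_range(pcr):
        """Return True if the PCR's status changed to ASSIGNED within [START, END]."""
        hist = requests.get(f"{BUGZILLA}/rest/bug/{pcr}/history").json()
        for entry in hist["bugs"][0]["history"]:
            when = datetime.fromisoformat(entry["when"].replace("Z", "+00:00"))
            for change in entry["changes"]:
                if change["field_name"] == "status" and change["added"] == "ASSIGNED":
                    if START <= when <= END:
                        return True
        return False

    for pcr in CANDIDATES:
        if assigned_in_range(pcr):
            print(f"PCR {pcr} was assigned during the reporting period")

A similar history scan over the Priority, Severity, Hardware, Target Milestone and Flags fields would cover reports (b) and (c).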

Proposed Changes to Bugzilla/PCRs

The objective is to leverage Bugzilla's capabilities as part of the CCB process improvement. This requires updating to a newer version of Bugzilla. The shift towards "component-centered" CCB Meetings shifts PCR activities to each component's development team. Anyone can create a PCR, whereas assigning a PCR (which enables work to be performed) will require management approval if the "Impact Assessment" field is NOT trivial. Bugzilla capabilities can be used to identify and report rule inconsistencies and changes to PCRs that affect work scope.

Continue current PCR practices:

  1. Create a new PCR: performed by anyone/anytime; however, PCRs must be assigned before they can be worked.
    • Fields filled out when the PCR is created: Product and Component.
    • Default values for new PCRs: Initial State (New), Severity (TBD), Priority (TBD), Target Milestone (TBD) and Impact Assessment (TBD)
  2. Resolve or Reopen a PCR: performed by anyone/anytime; requires a comment.
  3. Close a PCR: CCB2 Activity (during Active Development or General CCB meeting).

Define new PCR rules:

  1. Add a DO-178C classification attribute to each component, via a string in the component's description, "Component DO-178C Classification: <classification>". Classification values: DAL-A, DAL-B, DAL-C, DAL-D, DAL-E, TQL-4, TQL-5, UDT (Unqualified Development Tool) or PPP (Plan, Procedures and Packaging). This attribute is used to validate the values of Severity and Priority as follows:
    Mapping DO-178C Classification to Allowable Severity and Priority Values
    Classification            | Severity (See Note 2)                                                      | Priority (See Note 3)
    DAL-A, B, C & D and TQL-4 | TBD, Defect, Enhancement, Limitation, Process Non-Compliance or Work Step | TBD, Next Release, Any Upcoming, By Cert or Hold
    DAL-E and TQL-5           | TBD, Defect, Enhancement, Limitation, Process Non-Compliance or Work Step | TBD, Next Release, Any Upcoming or Hold
    UDT and PPP               | TBD, Defect, Enhancement or Work Step                                      | TBD, Next Release, Any Upcoming or Hold

    Note 1: The DO-178C Classification, Severity and Priority values are consistent between the public and private Bugzilla installations.
    Note 2: Transitioning Status to Assigned is not allowed if Severity=TBD.
    Note 3: "Hold" requires justification in special field; use "commenton". Transitioning Status to Assigned is not allowed if Priority=TBD.
    Implementation Details: Add a hook to verify that the "Classification" string is defined in the component's description when a new component is created. Also, add a hook to display the error message "Component Classification is not defined" when a user creates a PCR against an existing component that has no classification. This safety hook will catch any components that were not updated with a classification when this feature was added. A sketch of the validation logic appears after this list.


  2. New Impact Assessment field: prior to assigning a PCR, an assessment must be performed to determine effort (time required to implement/test the fix) and system impact (impact on other components). "Trivial" changes do not require management approval, but all other changes do. For a change to be "Trivial", it cannot impact other components, its Severity must be "Defect" (for DAL A-D components) or "Limitation" (for DAL-E components), and the effort must be <= 1 day. Values: "Trivial" (impacts 1 component AND effort <= 1 day), "Light" (effort > 1 day AND <= 1 week), "Medium" (effort > 1 week AND <= 3 weeks) and "Heavy" (effort > 3 weeks). Note: if the change impacts multiple components, a PCR should be created for each component impacted and recorded in the Depends On field.

  3. Assigning PCR: CCB2 activity during formal or informal CCB meeting; eg, may be performed during Active Development CCB meeting, weekly Deos team meeting, or monthly Product Availability Guide meeting. Assigning a PCR includes updating the following fields:
    • Assign To: provide a pull down list of valid email addresses
    • Accept PCR: update Status to Assigned
      • Assigning a PCR requires CCB privileges (CCB login) and management approval if the Impact Assessment is NOT "Trivial". This also applies to "Trivial" PCRs that have already been assigned when the developer updates the Impact Assessment to a different value (Light, Medium or Heavy). Management approval requires management attendance at the CCB (noted in the CCB meeting minutes with "DDC-I Management" after the attendee's name).

  4. Update rules on subversion commits: commits are only allowed when Status=ASSIGNED.

  5. This Feature is on HOLD: Use Bugzilla's Time Tracking feature to track estimated and actual time worked on a PCR. Next step is to somehow link this information to timesheets and project/charge codes (see PCR 7028).

  6. Enable tags: this is a new Bugzilla feature that allows users to add pre-defined tags to a PCR. See the following example:
    • tag=kalbi-boot: this tag could be attached to any/all PCRs of interest to kalbi-boot developers. Tags can then be used as search criteria.

Note 1: Any non-lifecycle data item which does not require a PCR (e.g., Release Notes) does not require CCB or management approval to modify.
Note 2: Level D/E components and OpenArbor, Trac and Score are not required to follow DO-178C standards; however, DDCI SW development plans apply to all components, so PCR rules are common across all components.
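
The classification, Severity and Priority rules above lend themselves to automation. Below is a minimal, hypothetical sketch of the validation logic such a Bugzilla hook might apply; only the mapping table and Notes 2-3 above are taken from this page, while the function names and the way the hook obtains the component description are assumptions.

    # Hypothetical validation helpers for the PCR rules above; how a Bugzilla hook
    # obtains the component description, Severity and Priority is assumed.
    import re

    CERT_SEV = {"TBD", "Defect", "Enhancement", "Limitation", "Process Non-Compliance", "Work Step"}
    TOOL_SEV = {"TBD", "Defect", "Enhancement", "Work Step"}
    CERT_PRI = {"TBD", "Next Release", "Any Upcoming", "By Cert", "Hold"}
    OTHER_PRI = {"TBD", "Next Release", "Any Upcoming", "Hold"}

    # Allowable (Severity, Priority) values per classification, from the mapping table above.
    ALLOWED = {}
    for c in ("DAL-A", "DAL-B", "DAL-C", "DAL-D", "TQL-4"):
        ALLOWED[c] = (CERT_SEV, CERT_PRI)
    for c in ("DAL-E", "TQL-5"):
        ALLOWED[c] = (CERT_SEV, OTHER_PRI)
    for c in ("UDT", "PPP"):
        ALLOWED[c] = (TOOL_SEV, OTHER_PRI)

    def classification(component_description):
        """Extract the classification from the component's description string."""
        m = re.search(r"Component DO-178C Classification:\s*(\S+)", component_description)
        if not m or m.group(1) not in ALLOWED:
            raise ValueError("Component Classification is not defined")
        return m.group(1)

    def check_values(component_description, severity, priority):
        """Reject Severity/Priority values that the classification does not allow."""
        sev_ok, pri_ok = ALLOWED[classification(component_description)]
        if severity not in sev_ok:
            raise ValueError("Severity '%s' not allowed for this classification" % severity)
        if priority not in pri_ok:
            raise ValueError("Priority '%s' not allowed for this classification" % priority)

    def may_assign(severity, priority):
        """Notes 2 and 3: a PCR cannot transition to Assigned while Severity or Priority is TBD."""
        return severity != "TBD" and priority != "TBD"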

Suggested Improvements to the New Process

Meeting occurred on 8/24/16 with several team members. Outcomes/decisions documented for each item.

  1. CCB Meeting Minutes:
    • Instead of using a number to identify multiple meetings on the same date, add a descriptive token. Richard wrote: We will not typically have multiple CCBs for a given component or set of components in a single day. Therefore, if we use tokens like "booyah release", "kernel planning", or "intel-mc pal approval", prepended to the "CCB visited" token added to the PCRs and to the query placed in the minutes, we avoid having to find the correct CCB iteration and provide more documentation as to the reason for the CCB.
    • Decision 1: Update the JavaScript to create a unique identifier for the CCB meeting minutes, such as incorporating a timestamp. This will replace the "-iteration" number being appended. Aaron is working this. Done
    • Decision 2: Update the python script which adds the "CCB visited this PCR on..." to incorporate the type of CCB meeting and component(s), to provide better search/grep capability. Need an assignee to work this; also requires updating the CCB howto.
  2. ASSIGNEE: set to actual user vs fake user
    • Option 1: set to actual user, and set "default CC list" to fake user. The ASSIGNEE field can be left blank (or TBD) even if the PCR Status is set to "ASSIGNED". In other words, a PCR can be approved to work (ASSIGNED), but the ASSIGNEE is still unknown. Once a maintainer starts working the PCR, update the ASSIGNEE field. Also, there is no rule/hook which precludes multiple maintainers from working a PCR (and making commits).
    • Option 2: default to fake user, and use the FLAG_ASSIGNEE to (optionally) identify the maintainer performing the work.
    • Decision: Go with Option 2. Kelly will update the PCR howto and Bugzilla online help. Done
  3. FLAG_REQUESTEE (field to the right of the flag): is it OK to leave this blank?
    • In the past, the FLAG value was meaningful, but not the FLAG_REQUESTEE. Does the team want to use the FLAG_REQUESTEE to help track who's working what?
    • AL wrote: Create a global search for "My flag assignments"...if that makes people's lives easier. Note: you can already search by FLAG_REQUESTEE; this field is listed in the Custom Search options.
    • RLF wrote: It seems like we need to update the flags description as well. I don't think at the planning CCB or even some active development CCB's we'll know who is going to be responsible for a test case/procedure that goes with a planned feature. It seems like blank should be allowed until CCB3 where we confirm flags are appropriate to close the PCR.
    • Decision 1: FLAG_REQUESTEE is optional. Kelly will update the PCR howto and Bugzilla online help. Done
  4. "Commit" hook suggestion: Instead of triggering on ASSIGNED, only allow commits if PRIORITY is set to a valid release (not TBD or HOLD).
    • With the new process, the PRIORITY must be set to a valid release when a PCR's status is set to ASSIGNED. What is the advantage of using the PRIORITY field?
    • RLF wrote: I agree with using reports to monitor priority/assignment changes rather than forcing CCBs to get approval. With that triggering the commit hooks on PRIORITY + RESOLVED checks makes sense.
    • Decision: Use the PRIORITY field in the commit hook instead of the status field (a sketch of such a hook appears after this list). Stephen will update the hook, and Kelly will update the PCR howto and Bugzilla online help. Done
  5. Impact Assessment:
    • Change the definition of TRIVIAL to be the aggregate amount of effort for all impacted components. Currently, if a problem impacts multiple components, the TRIVIAL value is not allowed.
    • Decision: Remove the criteria pertaining to multiple components. The Impact Assessment is solely based on the time/effort to make the changes. When multiple components are impacted, the developer will create multiple PCRs, and use the Depends On field to link them. Kelly will update the PCR howto and Bugzilla online help. Done
  6. In the CCB howto, add clarity to the Component Release CCB section:
    • From RR and RF: 3.) Make sure PCRs that are not on "Hold", and are not "Any Upcoming" with no changes since the [planning] CCB, are sufficiently complete for release. Note: OK for "opportunistic" PCRs to oscillate between "Any Upcoming" and "Hold", but not required.
    • 5.) Make sure all open PCRs are set to "Hold" or "Any Upcoming" and have not been changed since the planning CCB. From AL: I think the reason the kernel folks have moved PCRs from "Any Upcoming" to "Hold" is simply because we didn't want to have to figure out how to construct a query that excluded "Any Upcoming" + no changes.
    • Decision 1: Add comment to CCB howto on options for handling "opportunistic" PCRs; Priority of ANY UPCOMING implies opportunistic. If such a PCR is being deferred to the next release, the Priority may be updated to HOLD or NEXT RELEASE, or left as ANY UPCOMING. Kelly will update the CCB howto. Done
    • Decision 2: Add more details to the "Component Release CCB" section pertaining to "assessing all open PCRs"; PCRs on Hold are excluded from the assessment. Also add more details to the "Planning CCB" section on generating the set of relevant PCRs for the next release; Priority may be set to Next Release, By Cert or Any Upcoming. These options allow for interim releases without requiring additional Planning CCB meetings. This addresses item 4) in "Ryan's Proposed Changes..." (see below). Done
  7. Propose replacing the requirement for management approval before assigning PCRs with 2 reports:
    • Oversight Report: PCRs that have become "committable" since time T (programmable); committable includes PCRs whose Priority transitioned to "Next Release", "Any Upcoming" or "By Cert". Search criteria excludes TRIVIAL PCRs and those approved by Planning CCB. Aaron created an Oversight PCRs report for T=4 days, located at the bottom of the Bugzilla search page.
      • Note: This enables work to begin immediately, but gives management the opportunity to redirect effort if needed.
    • Guidance Report: PCRs with Priority = TBD; search criteria does not include a time "T" constraint.
  8. Ryan's Proposed Changes to CCB Meeting categories:
    The items below have been addressed by the items above.

    1) New kernel baseline development is started: ("Planning" CCB)
    • Go through all the kernel PCRs and decide what ones we plan to work. Update Priority:
      • PCRs that we don't plan: Priority = "Hold"
      • PCRs that are planned to be worked: Priority = "Next Release" or "By Cert" (as appropriate)
      • PCRs that are to be worked if the opportunity presents itself (e.g. another change will be made in the same area): Priority = "Any Upcoming"
    • For PCRs that are not on "Hold" assess the flags to make sure they have '?' for each potentially impacted item.
    • One step that I think should be added to the above is writing PCRs for the set of new planned features; in the past we have done this as a "just in time" activity, but if we do it up front it will provide better program management visibility.

    2) During development: ("Active Development" CCB)

    • New PCRs are created as needed and their priority assessed based on the criteria above. If the impact is "large", the team and management are consulted to help prioritize if necessary (sometimes it is a "must fix" bug, so all we can do is inform management of the impact). Set the Flags to '?' for anything that may be impacted.
    • As various pieces of a PCR finish development set the associated flag to '+'. Once all impacted elements are '+' the PCR can be closed or left open pending review (this is the developer's choice).
    • If we decide not to work a PCR that was "planned" (this can happen when a customer cancels (e.g., the Bendix King ARM port) or we decide to go with a different implementation of a feature/problem) or "opportunistic", set Priority to "Hold" or resolve as "Won't Fix". Note: This step happens more frequently as development nears the end.

    3) Before a release: ("Component Release" CCB - may be an interim or cert release)

    • Make sure PCRs that are not on "Hold" are sufficiently complete for the release.

    4) After an interim release: ("Active Development" CCB)

    • Re-prioritize the "Next Release" and "By Cert" PCRs for the upcoming release, which may be another interim release or cert release; change some "By Cert" to "Next Release" or vice versa.

    5) When verification is complete: ("Component Release" CCB)

    • Make sure all open PCRs are on "Hold".
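
    Referring back to item 4 above, a minimal sketch of the PRIORITY-based commit hook is shown below. It is illustrative only: the Bugzilla URL, the convention for citing a PCR in the log message, and the set of rejected Priority values are assumptions; the hook uses svnlook and the Bugzilla REST API.

        #!/usr/bin/env python
        # Hypothetical svn pre-commit hook: allow the commit only when the cited PCR's
        # Priority names a valid release (not TBD or Hold).  The repository path and
        # transaction name are the standard pre-commit hook arguments.
        import re
        import subprocess
        import sys

        import requests

        BUGZILLA = "https://bugzilla.example.com"   # placeholder URL

        def main(repos, txn):
            log = subprocess.check_output(["svnlook", "log", "-t", txn, repos], text=True)
            m = re.search(r"PCR:(\d+)", log)        # assumed log-message convention
            if not m:
                sys.stderr.write("Commit rejected: log message does not cite a PCR\n")
                return 1
            bug = requests.get(f"{BUGZILLA}/rest/bug/{m.group(1)}",
                               params={"include_fields": "priority"}).json()
            priority = bug["bugs"][0]["priority"]
            if priority in ("TBD", "Hold"):
                sys.stderr.write(f"Commit rejected: PCR priority is '{priority}'\n")
                return 1
            return 0

        if __name__ == "__main__":
            sys.exit(main(sys.argv[1], sys.argv[2]))

    Per RLF's note, a check that the PCR's Status is not already Resolved or Closed could be added alongside the Priority check.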

    Documents/Files Impacted by New Process:

    1. change-control-board-howto
    2. problem-reporting-howto
    3. software-release-howto
    4. sqa-procedures-howto
    5. configuration-management-howto
    6. customer-acceptance-howto
    7. delta-baseline-howto
    8. document-publishing-howto
    9. software-change-howto
    10. software-component-verification-howto
    11. software-release-howto
    12. ccbm.py

    Add'l Docs which may be impacted (reference "PCR")

    1. howto/certification-archive-howto/certification-archive.py
    2. howto/certification-archive-howto/certification-archive.sh
    3. howto/compiler-assessment-for-abc-howto/compiler-assessment-for-abc-howto.htm
    4. howto/document-publishing-howto/document-review-checklist-help.htm
    5. howto/howto-howto/howto-howto.htm
    6. howto/requirements-coverage-analysis-howto/cvt-requirements-coverage-analysis.htm
    7. howto/review-process-user-howto/baselineStatusReport.py
    8. howto/review-process-user-howto/dpsf.py
    9. howto/review-process-user-howto/review-process-user-howto.htm
    10. howto/review-process-user-howto/ReviewProcessGame.htm
    11. howto/review-process-user-howto/testCaseChecklist.htm
    12. howto/review-process-user-howto/testProcedureChecklist.htm
    13. howto/sqa-procedures-howto/audit-report-template.htm
    14. howto/sqa-procedures-howto/witness-template.htm
    15. howto/structural-coverage-analysis-howto/structural-coverage-analysis-howto.htm
    16. howto/tools-howto/tools-howto.htm
    17. plans/psac/do178c-plan-for-software-aspects-of-certification-for-ddci-software
    18. plans/scmp/do178c-software-configuration-management-plan-for-ddci-software
    19. plans/scmp/pcr-life-cycle.vsd
    20. plans/sqap/do178c-software-quality-assurance-plan-for-ddci-software
    21. plans/tqp/tool-qualification-plan

    Single CM and PCR Database - Rejected

    This proposal has been assessed, and will not be implemented.

    Objectives:

    • Consider schemes, with Honeywell buy-in, for a return to one PCR and one CM development database. The concept would be to utilize access control levels within subversion and bugzilla.
    • Problems with having 2 copies of svn and bugzilla:
      • Confusion on what is shared and what is not
      • Items in DDC-I repository that should be in shared
      • Shared plans and procedures that reference DDC-I-specific items (passwords are an example).
      • Shared Cert archive data
      • Honeywell has a live repository for shared items. They can work on and change items we may not want to accept. Work on the ANSI library and LWIP are examples.
      • Cannot utilize "Depends On" or "Blocks" fields in bugzilla to cross reference PCRs in the 2 databases.

    Requirements:

    • Eliminate confusion for DDC-I developers about what is shared and what is not.
    • Identify and eliminate/reduce interference patterns in the shared repository between DDC-I interests and Honeywell interests.

    Status:

    As part of the process improvements for CCB and PCRs (see above), DDCI is returning to one Bugzilla database.

    Manual Code Review

    Objectives:

    • Reduce manual code review time (use tools, in particular static analysis tools).

    Requirements:

    • TBD

    Process Proposal

    • TBD

    Questions:

    1. What items in our standards (checklists) can be automated in a tool?
    2. What are the qualification requirements for tools used to check standards?
    3. Can the tools be customized as we change standards?
    4. What are the costs of obtaining/maintaining the tools?

    Status Files and Databases

    Projects currently contain process/CodeStatus.txt and similar files that manage several bits of information relating to verification. (These are only required for Level A projects.)

    The status file currently answers these questions:

    • What are all the files in a project (must be in sync with scm)?
    • Is a file in scope for review? If not, call it a 6.
    • When, where, and by whom was the file last reviewed (for the purpose of doing a diff review)?
    • Is the file probably going to change before the next verification? If so, call it a 2.
    • Does the developer think the file is ready for review? If so, call it a 3.
    • Who is currently reviewing this file? And if anyone, call it a 4.
    • Has the review been completed? If so, call it a 5 (and this must be in sync with the cert archive).

    Additionally, management has requested a feature for predicting which new files will probably be created before the next verification (more abstractly, some way of estimating how much additional work is not covered by the above).
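
    To make the numeric codes easier to read in scripts, a small enum like the one below could name them. The meanings are taken from the questions above; the class name is made up, and the meaning of code 1 is not spelled out on this page.

        # Hypothetical names for the status-file codes described above.
        from enum import IntEnum

        class FileStatus(IntEnum):
            BASELINE = 1          # referenced below ("merges statuses 1 and 5") but not defined here
            WILL_CHANGE = 2       # file will probably change before the next verification
            READY_FOR_REVIEW = 3  # developer thinks the file is ready for review
            IN_REVIEW = 4         # someone is currently reviewing the file
            REVIEW_COMPLETE = 5   # review completed (must be in sync with the cert archive)
            OUT_OF_SCOPE = 6      # file is not in scope for review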

    Problems

    • There is duplication, which is difficult to maintain:
      • List of files in the status file vs list of files in svn.
      • When, where, by whom was the latest review vs the review packets in the cert archive.
    • Committing a text file with changes to svn is not ideal for avoiding race conditions.
    • The numbers 1-6 are hard to remember and understand.

    Proposal

    Use svn properties

    Move the following information from the status files to svn properties:

    • Is a file in review scope?
    • Where was a file last reviewed, if not at its current location? This information would be an svn path and a peg revision. Usually this is automatically tracked by svn cp.
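
    A rough sketch of how these two items could be stored and read back as svn properties follows; the property names (ddci:review-scope, ddci:last-reviewed-at) are invented for illustration, not an agreed convention.

        # Hypothetical use of custom svn properties for review scope and
        # last-reviewed location; property names are placeholders.
        import subprocess

        def set_review_scope(path, in_scope):
            subprocess.check_call(
                ["svn", "propset", "ddci:review-scope", "yes" if in_scope else "no", path])

        def set_last_reviewed(path, reviewed_url_at_peg):
            # e.g. "https://svn.example.com/repo/trunk/foo.c@12345"
            subprocess.check_call(
                ["svn", "propset", "ddci:last-reviewed-at", reviewed_url_at_peg, path])

        def get_prop(path, name):
            return subprocess.check_output(["svn", "propget", name, path], text=True).strip()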

    Make the cert archive a real database

    While it is currently a collection of printer-friendly renderings of an interactive web form, the cert archive should really be a queryable database with nearly instant response time (e.g., PostgreSQL).

    This answers the following questions:

    • When and by whom was the file last reviewed (for the purpose of doing a diff review)?
    • Has the file review been completed? I.e., has the HEAD revision in svn been reviewed?

    What about auditors who expect PDF?

    Give the cert archive database the ability to render a review as a PDF. Our current PDFs are already just renderings of the real form. More generally, they are renderings of the review information. The information can be stored in a relational database and rendered to PDF without being dishonest.

    What about the digital signatures?

    When a review is completed, the cert archive program can generate a signature (which is just bytes) and store it in the database alongside all the other review information. (Alternatively, a human could generate some kind of signature and provide it to the database, if we're not comfortable with a machine signing the review information.) The signature can also be rendered as a .sig file alongside the rendered .pdf file.
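
    As a rough sketch of what a queryable cert archive might look like, the table and queries below answer the two questions above. The schema is an assumption, and sqlite3 is used only so the sketch is self-contained; the proposal above names PostgreSQL.

        # Illustrative-only schema and queries for a cert-archive review database.
        import sqlite3

        db = sqlite3.connect("cert_archive.db")
        db.execute("""
            CREATE TABLE IF NOT EXISTS review (
                file_path   TEXT NOT NULL,     -- svn path of the reviewed file
                peg_rev     INTEGER NOT NULL,  -- revision that was reviewed
                reviewer    TEXT NOT NULL,
                reviewed_on TEXT NOT NULL,     -- ISO date
                signature   BLOB               -- digital signature bytes, if any
            )""")

        def last_review(file_path):
            """When, at what revision, and by whom was the file last reviewed?"""
            return db.execute(
                "SELECT reviewed_on, peg_rev, reviewer FROM review "
                "WHERE file_path = ? ORDER BY peg_rev DESC LIMIT 1",
                (file_path,)).fetchone()

        def review_complete(file_path, head_rev):
            """Has the HEAD revision in svn been reviewed?"""
            row = last_review(file_path)
            return row is not None and row[1] >= head_rev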

    Project coordinating and estimates

    Use a separate database to answer the following questions:

    • Is a file probably going to change before the next verification?
    • How many more files will probably be created before the next verification?
    • Does the developer think a file is ready for review?
    • Who is currently reviewing what files?

    Identifying verification baselines

    A verification baseline can be captured as a single svn path with a peg revision. Using this information, the cert archive database described above can tell you what files changed between two baselines or since the most recent baseline.

    This effectively merges statuses 1 and 5.
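
    A sketch of how the "what changed between two baselines" question could be answered directly from svn is shown below; the baseline URLs are placeholders, and svn diff --summarize is assumed to be the underlying mechanism.

        # Hypothetical helper: list files changed between two verification baselines,
        # each identified as an svn path with a peg revision (URLs are placeholders).
        import subprocess

        def changed_files(old_baseline, new_baseline):
            """e.g. changed_files("https://svn.example.com/repo/tags/rel-1@100",
                                  "https://svn.example.com/repo/tags/rel-2@250")"""
            out = subprocess.check_output(
                ["svn", "diff", "--summarize", "--old", old_baseline, "--new", new_baseline],
                text=True)
            # Each line looks like "M       path/to/file.c"; keep the status and path.
            return [line.split(None, 1) for line in out.splitlines() if line.strip()]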

    Data migration

    We've got lots of important verification data that's currently stored in PDF form. We'll need to migrate it out of PDF and into the database proper and svn properties. Cross-checking the PDF data against the status files will probably uncover several inconsistencies, which will need to be addressed. The good news is that this migration will only need to happen once, and then the new system will sustain itself.

    Looking far into the future, we'll probably eventually want to migrate this data yet again to a new system. This migration will be significantly easier than the above, because the information will not be as duplicated, and mechanically reading a database is easy.

    Report Documents

    The Problem:

    We currently divide the implementation of the DO-178 report-document objectives across three separate Microsoft Word documents. These are the Software Life Cycle Environment Configuration Index (SLCECI), Software Configuration Index (SCI), and Software Accomplishment Summary (SAS). We "clone and own" copies of these documents for each software component that requires them. The problems should be obvious: we are copying document content, thus duplicating information that must be reviewed and accepted; engineers can lose time trying to figure out which existing document set to copy from; and expensive verification resources (i.e., engineers) are being used to review administrivia (headers, footers, copyrights) and boilerplate prose over and over again.

    Basically, the above is not scalable beyond its traditional use case of one verification baseline every two to three years.

    Proposed Fixes:

    Collapse the three report documents into a single report document. This will eliminate some duplication across the three, and reduce the review burden.

    Port report document content to XML. Subdivide this XML into a collection of files that can be reviewed and accepted once, and then re-used. Also, once moved to XML, we can bring some of XML's tricks to bear to further reduce the human calories burned on copy-and-paste, e.g., use XInclude to insert requirements coverage analysis content directly into the appropriate report document section.
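
    As a small illustration of the XInclude idea, a report section could pull in a separately reviewed content file as shown below. The file names, element names and content are invented for the example, and lxml is used here only as one library that resolves XInclude.

        # Illustrative only: a report fragment that XIncludes separately reviewed content.
        from lxml import etree

        # Separately reviewed content fragment (written here so the example is self-contained).
        with open("rca-content.xml", "wb") as f:
            f.write(b"<coverage>All requirement tags covered by test or analysis.</coverage>")

        with open("report.xml", "wb") as f:
            f.write(b"""<report xmlns:xi="http://www.w3.org/2001/XInclude">
              <section title="Requirements Coverage Analysis">
                <xi:include href="rca-content.xml"/>
              </section>
            </report>""")

        tree = etree.parse("report.xml")
        tree.xinclude()   # replaces the xi:include element with the referenced content
        print(etree.tostring(tree, pretty_print=True).decode())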

    Just Evaluate and Do

    Add minor cleanup or nice to have items here. Move to #Completed_Items when the fix is in and time for peer review (recommended 1 week) has expired.

    RFS Phase

    PLT verification

    The linker injects a jump table for resolving external symbols. This table (at least on x86) contains code to support lazy binding. Because the kernel performs binding at image load time, this code is deactivated by the level A kernel linker. Since this code is injected by the linker, the abc tool does not instrument it. This falls under the category of object code that does not directly trace to source. We should capture a justification for why additional verification of the PLT is not necessary.

    Jump Table Concern

    Even though we don't use jump tables with ABC-compliant coverage, we cannot be sure that jump tables with repeated values are correct unless we take a look. Unless we have tried all possible choice values for a jump table, we cannot be sure that a choice interval, e.g., 7..10, actually sends all four values to the same destination. If our test has exercised only 8 and 9, how can we be sure that 7 and 10 would reach the same destination?

    The sole kernel jump table is one that selects among 24 possible ELF-defined DT_... values in an ELF or PE file. The source code for both PPC and X86 mentions only the 13 that are relevant for PPC and X86 (different sets), and the remaining 11 use the default. It is probably not possible to create a test case that defines all of the object-file-defined DynamicArray DT_xxxx values that are not selected, so inspection is likely the only way to assure that, should we encounter any of the choice values that are supposed to end in the default, the jump table will take them to the default code. This is what the compiler/linker is supposed to do, and probably does, but we cannot be sure it is done unless we check it, can we?

    Component Specific

    Kernel

    can-alt

    The kernel readme.txt, and the kernel and common test howtos do not make it crystal clear what the relationships are between problems.mk, .can-alt, and the procedure for doing final run for score.

    As it is now, problems.mk and the .can-alt are development resources, not intended for use during RFS.

    Alternately, we could work .can-alt into the final RFS, e.g., the foobar platform does not have an FPU, so the "no-FPU" .can-alt files should be used in preference to the .can files. This would catch inadvertent false-pass tests and minimize work.

    Alternately, we could do away with .can-alt entirely and have the tests properly diagnose the cases that now require a "this test case will fail" message. AL doesn't like this on first principles, because saying "I tested this and it passed" is not the same as "I determined I don't need to test this, so it passed". It seems like a slippery slope. The PPC FPU case is the canonical example of how you can get screwed: the test infrastructure snoops for FPU support and returns a bool, but PPC FPU support is really three-valued: {none, classic-fpu, spe-fpu}, and some tests would have falsely passed had we assumed that not(classic-fpu) == none. Having an "iron clad" answer, e.g., a table mapping PVR to capabilities, would lessen the issue, but the first principle above is still concerning.

    Processor Tables

    The kernel and boot/PALs make use of the kernel's processor-table.dtd, which maps register-like processor resources into massive tables with very many footnotes. It might be better to have a table summary with textual requirements, all generated from the same DTD content.

    Javascript Frames

    Josh and AL have been working a "Deos after dark" project that might provide an alternative, namely a JavaScript frames view that makes the footnotes more accessible. For an example, see the inetd-user-guide.

    LSP & Code Checklist for OOT

    Should we add LSP to code review checklist?

    CFFS MAL

    New Test Harness

    As part of the cffsi7sata MAL test effort, a new lightweight test infrastructure was created that:

    1. Could be used from the command-line like the legacy
    2. Could simultaneously use DDS for test procedure development & debug

    Additional work occurred under PCR:9742 in an effort to make this new harness useful to others. This work should be evaluated and a determination made whether to continue with this new harness as an option for future test efforts or whether its capabilities can/should be added to the existing harness and regress script technology.

    Completed Items

    Delta Baseline Howto

    • The howto is ambiguous with regard to non-primal life cycle data. This section could be interpreted to mean that a SAS change would require a delta cert and capsule release.
      • PCR:9905 written and howto updated to distinguish between non-primal life cycle data in the customer release and non-primal life cycle data not in the customer release.
      • Delta Baseline Howto

    Software Release Howto

    Integration Checklist

    Ideas include:

    • Remove the compiler version, make version, etc, since this is in the BUILD-ENVIRONMENT-README.txt
    • I found at least 2 people who could not "Save as" pdf. Perhaps we should just write the address of that link instead of making the YES link to it. I don't think the hassle is worth the effort.
      • No change here.

    Compiler Assessment

    There was a recent update to the compiler assessment which causes each set of switches to be reported in a separate .s file. Right now there are only a couple to choose from, but the number of assessments will increase and make it difficult to determine the correct assessment to use for each component. The switches are all printed in the first line of the assessment. I proposed a python script (findCompilerAssessment) that would take the list of switches and the compiler version, strip out the switches that can be ignored, compare them against the compiler assessments currently in the archive, and return the path to the correct assessment if one exists.
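
    A minimal sketch of such a script follows. The ignorable-switch list, the assumption that the first line of each .s assessment file contains the compiler version and switches, the matching rule, and the archive layout are all placeholders to be confirmed against the real archive.

        # Hypothetical findCompilerAssessment sketch: match a compiler version and
        # switch set against the assessments in the archive.  IGNORABLE and the
        # first-line format are assumptions, not the real convention.
        import glob
        import sys

        IGNORABLE = {"-o", "-c"}   # switches that do not affect the assessment (placeholder list)

        def normalize(switches):
            return frozenset(s for s in switches if s not in IGNORABLE)

        def find_assessment(archive_dir, compiler_version, switches):
            wanted = normalize(switches)
            for path in glob.glob(f"{archive_dir}/**/*.s", recursive=True):
                with open(path) as f:
                    first_line = f.readline().split()
                # Guessed matching rule: version appears on the first line and the
                # first line's switch set covers all the switches we care about.
                if compiler_version in first_line and normalize(first_line) >= wanted:
                    return path
            return None

        if __name__ == "__main__":
            # usage: findCompilerAssessment <archive-dir> <compiler-version> <switches...>
            match = find_assessment(sys.argv[1], sys.argv[2], sys.argv[3:])
            print(match if match else "no matching assessment found")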

    Structural Coverage Analysis Howto

    The structural-coverage-analysis-howto.htm does not point to a template for a report. As a result, warnings in the output that need documenting can be missed. Looking for warnings and justifying them should be added to the "What to do if this is a Run For Score" section.

    Relax/Remove Coverage Analysis Checklist Item

    The coverage analysis checklist has the following item:

    All analysis activities for Requirements tags that are verified by analysis have been completed and the 
    evidence for such stored within the certification archive.
    

    This item creates a bottleneck in the verification process that forces all the reviews of the Analyses to be completed before the Requirements Coverage Analysis can be completed.