Posts Issued on January 14, 2025

Automatic Generation of Fault Trees (17)

posted by sakurai on January 14, 2025 #928

Comparing Old and New Coefficients in Method 3

Table 217.1 in an earlier article summarizes the manually assigned coefficients for Method 3; here we compare them with the coefficient assignments produced by ChatGPT.

First, I showed ChatGPT the old cut-set table (Table 217.2) and the new one (Table 925.1) and had it create old.txt and new.txt.

old.txt

C4,M1,M2
C2,M2,SC1
C2,M1,SC2
C1,SC1,SC2
C1,SA1,SA2
C2,M2,MCU1
C2,M1,MCU2
C1,MCU2,SC1
C1,MCU1,SC2
C5,I2,M1
C5,I1,M2
C1,MCU1,MCU2
C3,I2,SC1
C3,I1,SC2
C7,I1,I2
C3,I2,MCU1
C3,I1,MCU2
C8,I2,P1
C8,I1,P2
C6,M2,P1
C6,M1,P2
C4,P2,SC1
C4,P1,SC2
C9,P1,P2
C4,MCU2,P1
C4,MCU1,P2
C2,D2,M1
C2,D1,M2
C1,D2,SC1
C1,D1,SC2
C1,D2,MCU1
C1,D1,MCU2
C3,D2,I1
C3,D1,I2
C3,CA2,SA1
C3,CA1,SA2
C4,D2,P1
C4,D1,P2
C1,D1,D2
C7,CA1,CA2

new.txt

C12,M1,M2
C17,M2,SC1
C17,M1,SC2
C15,SC1,SC2
C15,SA1,SA2
C17,M2,MCU1
C17,M1,MCU2
C15,MCU2,SC1
C15,MCU1,SC2
C19,I2,M1
C19,I1,M2
C15,MCU1,MCU2
C16,I2,SC1
C16,I1,SC2
C18,I1,I2
C16,I2,MCU1
C16,I1,MCU2
C13,I2,P1
C13,I1,P2
C14,M2,P1
C14,M1,P2
C12,P2,SC1
C12,P1,SC2
C11,P1,P2
C12,MCU2,P1
C12,MCU1,P2
C17,D2,M1
C17,D1,M2
C15,D2,SC1
C15,D1,SC2
C15,D2,MCU1
C15,D1,MCU2
C16,D2,I1
C16,D1,I2
C16,CA2,SA1
C16,CA1,SA2
C12,D2,P1
C12,D1,P2
C15,D1,D2
C18,CA1,CA2

I then had it write a program that compares the two files.

#!/usr/bin/env python3
import sys
from collections import defaultdict

"""
Usage:
  python transform_coverage.py old.txt new.txt

Where:
  old.txt, new.txt each line looks like:
    C4,M1,M2
    C1,SA1,SA2
  etc.

We parse the first token as coverage name (e.g. 'C4'),
the rest as element names (e.g. 'M1','M2').
We store them in a dictionary keyed by the sorted tuple of elements.

Then we produce a transform table:
  old_coverage -> new_coverage
for each matching element-tuple.
If multiple old coverages map to the same new coverage or vice versa,
we show those collisions or many-to-many relationships explicitly.
"""

def parse_cutset_file(filename):
    """
    Parse lines like "C4,M1,M2" into:
      coverage='C4'
      elements=('M1','M2')  # sorted
    We'll store coverage -> set of element_tuples
    We'll also store element_tuple -> coverage
    Returns (coverage2sets, element2cov)
    """
    coverage2sets = defaultdict(set)   # coverage -> { (el1,el2,...) , ... }
    element2cov = {}
    with open(filename, "r", encoding="utf-8") as f:
        for line in f:
            line=line.strip()
            if not line or line.startswith("#"):
                continue
            # split by comma
            parts = [x.strip() for x in line.split(",")]
            coverage = parts[0]
            elements = parts[1:]
            # sort elements so (M1,M2) == (M2,M1)
            sorted_e = tuple(sorted(elements))
            coverage2sets[coverage].add(sorted_e)
            element2cov[sorted_e] = coverage
    return coverage2sets, element2cov

def main(oldfile, newfile):
    old_cov2sets, old_elem2cov = parse_cutset_file(oldfile)
    new_cov2sets, new_elem2cov = parse_cutset_file(newfile)

    # We'll create a transform table: oldCov -> newCov
    # by checking each element tuple in old
    # and see what coverage is assigned in new.
    transform_map = defaultdict(set)  
    # key=oldCoverageName, value=set of newCoverageNames
    # Because it's possible multiple new coverages appear.

    # Also track if some old coverage references multiple distinct new coverage
    # or vice versa.

    # all element tuples from old
    for etuple, oldcov in old_elem2cov.items():
        if etuple in new_elem2cov:
            newcov = new_elem2cov[etuple]
            transform_map[oldcov].add(newcov)
        else:
            # no match in new => discrepancy
            transform_map[oldcov].add("NO_MATCH_IN_NEW")

    # Now produce a nice table
    print("=== Coverage transform table ===")
    for oldcov in sorted(transform_map.keys()):
        newcovs = transform_map[oldcov]
        if len(newcovs)==1:
            single = list(newcovs)[0]
            if single=="NO_MATCH_IN_NEW":
                print(f"{oldcov} => [NO MATCH in NEW file!]")
            else:
                print(f"{oldcov} => {single}")
        else:
            # multiple new coverages
            nclist = ",".join(sorted(newcovs))
            print(f"{oldcov} => {{{nclist}}}  # multiple new coverage found for same old coverage")

    print()
    print("=== Reverse check: new coverage => old coverage ===")
    # We'll do similarly in reverse
    # create new_elem2cov from parse => done
    # but we want coverage -> set of element tuples
    rev_map = defaultdict(set)  # new-> old coverage
    for etuple, newcov in new_elem2cov.items():
        if etuple in old_elem2cov:
            oldcov = old_elem2cov[etuple]
            rev_map[newcov].add(oldcov)
        else:
            rev_map[newcov].add("NO_MATCH_IN_OLD")

    for newcov in sorted(rev_map.keys()):
        oldcovs = rev_map[newcov]
        if len(oldcovs)==1:
            single = list(oldcovs)[0]
            if single=="NO_MATCH_IN_OLD":
                print(f"{newcov} => [NO MATCH in OLD file!]")
            else:
                print(f"{newcov} => {single}")
        else:
            # multiple old coverage
            oclist = ",".join(sorted(oldcovs))
            print(f"{newcov} => {{{oclist}}}  # multiple old coverage found for same new coverage")

    print()
    print("=== Detailed mismatch check (cutset by cutset) ===")
    # We'll unify the union of all element tuples
    all_etups = set(list(old_elem2cov.keys())+ list(new_elem2cov.keys()))
    for et in sorted(all_etups):
        ocov = old_elem2cov.get(et,"-")
        ncov = new_elem2cov.get(et,"-")
        elements_str = ",".join(et)
        print(f"Cutset({elements_str}): old={ocov} , new={ncov}")

if __name__ == "__main__":
    if len(sys.argv) < 3:
        print("Usage: python transform_coverage.py old.txt new.txt")
        sys.exit(1)  # exit with a nonzero status on a usage error
    oldfile = sys.argv[1]
    newfile = sys.argv[2]
    main(oldfile, newfile)
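The key idea in `parse_cutset_file` is keying each cut set by the *sorted* tuple of its elements, so that element order does not affect the match. A minimal standalone sketch (with hypothetical sample lines drawn from the two files above) illustrates why this works even when the old and new files list the same pair in different orders:

```python
def parse_lines(lines):
    """Map each cut set's sorted element tuple to its coverage name."""
    elem2cov = {}
    for line in lines:
        parts = [x.strip() for x in line.split(",")]
        # sorting makes ("M1","M2") and ("M2","M1") the same key
        elem2cov[tuple(sorted(parts[1:]))] = parts[0]
    return elem2cov

# "M1,M2" vs "M2,M1": the textual order differs, the sorted keys coincide.
old = parse_lines(["C4,M1,M2", "C1,SA1,SA2"])
new = parse_lines(["C12,M2,M1", "C15,SA1,SA2"])
mapping = {old[k]: new[k] for k in old}
print(mapping)  # {'C4': 'C12', 'C1': 'C15'}
```

This is the same dictionary construction the full script performs on old.txt and new.txt.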


Next, the following command displays the old-to-new correspondence (the head/tail pipe extracts the nine mapping lines, skipping the section header printed by the script):

\$ python transform_coverage.py old.txt new.txt | head -10 | tail -9

The output is:

C1 => C15
C2 => C17
C3 => C16
C4 => C12
C5 => C19
C6 => C14
C7 => C18
C8 => C13
C9 => C11

Based on this, I summarize the correspondence in a table. The values of the new coefficients (C11 through C19) were taken directly from the BEI file in the MARD.

Table 928.1

Old symbol   Value        New symbol   Value
C1           0.2280772    C15          0.2280772
C2           0.2287720    C17          0.2287720
C3           0.2310880    C16          0.2310880
C4           0.2357200    C12          0.2357200
C5           0.2588800    C19          0.2588800
C6           0.3052000    C14          0.3052000
C7           0.3515200    C18          0.3515200
C8           0.5368000    C13          0.5368000
C9           1.0000000    C11          1.0000000
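The pairing in Table 928.1 can also be cross-checked mechanically. The snippet below simply transcribes the table and the transform-table output shown above and asserts that every paired value matches:

```python
# Coefficient values transcribed from Table 928.1.
old_vals = {"C1": 0.2280772, "C2": 0.2287720, "C3": 0.2310880,
            "C4": 0.2357200, "C5": 0.2588800, "C6": 0.3052000,
            "C7": 0.3515200, "C8": 0.5368000, "C9": 1.0000000}
new_vals = {"C15": 0.2280772, "C17": 0.2287720, "C16": 0.2310880,
            "C12": 0.2357200, "C19": 0.2588800, "C14": 0.3052000,
            "C18": 0.3515200, "C13": 0.5368000, "C11": 1.0000000}
# Old-to-new name mapping from the transform table output.
pairing = {"C1": "C15", "C2": "C17", "C3": "C16", "C4": "C12",
           "C5": "C19", "C6": "C14", "C7": "C18", "C8": "C13",
           "C9": "C11"}
for old_name, new_name in pairing.items():
    assert old_vals[old_name] == new_vals[new_name]
print("all 9 coefficient values match")
```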

Although the coefficient names are assigned differently, the coefficient values and their assignments to element pairs were confirmed to match completely. The old values C1 through C9 are themselves taken from Table 217.1.

Even though the assigned coefficients are exactly the same, the top-event violation probability differs slightly between the old and new articles (old $8.32\color{red}{1}\cdot 10^{-4}\Rightarrow$ new $8.32\color{red}{4}\cdot 10^{-4}$), as does the PMHF (old $55.4\color{red}{7}\Rightarrow$ new $55.4\color{red}{9}$). This is presumably due to the difference in Saphire versions: the old results were produced with the 2020 release, the new ones with the 2024 release.
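As a quick sanity check on the size of these discrepancies, the relative differences can be computed from the figures quoted above; both turn out to be on the order of 0.04 %, which is consistent with a minor numerical change between Saphire releases rather than a modeling difference:

```python
# Top-event violation probability from the old and new Saphire runs.
old_p, new_p = 8.321e-4, 8.324e-4
# PMHF values as quoted in the old and new articles.
old_pmhf, new_pmhf = 55.47, 55.49

rel_p = (new_p - old_p) / old_p
rel_pmhf = (new_pmhf - old_pmhf) / old_pmhf
print(f"probability: {rel_p:.4%}, PMHF: {rel_pmhf:.4%}")
```

Both relative differences come out to roughly 0.036 %, i.e. the two quantities shifted by the same proportion.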

Note: because this work is planned for submission to RAMS 2026, some details are withheld; they are scheduled to be disclosed around February 2026, after the paper is published.

