Posts Tagged with "AI"

Already-published blog posts may be revised or extended as appropriate.

Automatic Generation of Fault Trees (18)

posted by sakurai on January 15, 2025 #929

Structure of the Fault Tree

We construct the fault tree (FT) in accordance with the PMHF equation. The base FT is shown in Figure 929.1. This FT construction method follows our past papers, and the tree is built so as to reproduce the PMHF equations shown below.

Figure 929.1. Structure of the FT

Next we show the corresponding PMHF equations. There are three variants of the FT construction method, Method 1, 2, and 3, each with the following characteristics:

  • Method 1: Assumes there is no 2nd SM. The simplest tree; since the PMHF comes out larger than the true value, it is convenient for an initial worst-case look. $$ M_\text{PMHF}=(1-K_\text{IF,RF})\lambda_\text{IF}+K_\text{IF,RF}\lambda_\text{IF}\lambda_\text{SM}T_\text{lifetime} $$
  • Method 2: Adds the effect of the 2nd SM coverage $K_\text{SM,MPF}$. A drawback is that the PMHF comes out smaller than the true value by the amount of the effect that Method 3 below adds; this error is negligible when $K_\text{SM,MPF}$ is small or $\tau$ is small. $$ M_\text{PMHF}=(1-K_\text{IF,RF})\lambda_\text{IF}+K_\text{IF,RF}\lambda_\text{IF}\lambda_\text{SM}\color{red}{\left((1-K_\text{SM,MPF})T_\text{lifetime}\right)} $$
  • Method 3: On top of Method 2, adds the effect of 2nd SM faults going undetected within the periodic inspection interval $\tau$. The PMHF is the true value; the added undetected-fault effect is negligible when $K_\text{SM,MPF}$ is small or $\tau$ is small. $$ M_\text{PMHF}=(1-K_\text{IF,RF})\lambda_\text{IF}+K_\text{IF,RF}\lambda_\text{IF}\lambda_\text{SM}\left((1-K_\text{SM,MPF})T_\text{lifetime}\color{red}{+K_\text{SM,MPF}\tau}\right) $$
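As a numeric illustration of the three formulas above, the following sketch evaluates them side by side. All parameter values are assumptions chosen for demonstration only, not values from this article.

```python
# Illustrative evaluation of the three PMHF formulas; all parameters below
# are assumed values for demonstration, not values from the article.
K_IF_RF = 0.99     # residual-fault coverage of the 1st SM
K_SM_MPF = 0.90    # multiple-point-fault coverage of the 2nd SM
lam_IF = 1.0e-6    # failure rate of the intended function [1/H]
lam_SM = 1.0e-6    # failure rate of the safety mechanism [1/H]
T_life = 1.5e4     # vehicle lifetime [H]
tau = 3420.0       # periodic inspection interval [H]

spf = (1 - K_IF_RF) * lam_IF            # residual (single-point) term
dpf = K_IF_RF * lam_IF * lam_SM         # dual-point base term
m1 = spf + dpf * T_life
m2 = spf + dpf * (1 - K_SM_MPF) * T_life
m3 = spf + dpf * ((1 - K_SM_MPF) * T_life + K_SM_MPF * tau)

for name, m in [("Method 1", m1), ("Method 2", m2), ("Method 3", m3)]:
    print(f"{name}: {m * 1e9:.2f} FIT")  # 1 FIT = 1e-9 /H
```

With these numbers Method 1 overestimates and Method 2 underestimates the Method 3 value, matching the ordering described above.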

Plotting the effect of the Method 3 coefficient as a 3D graph gives the shape shown in Figure 929.2.

$$\img[-1.35em]{/images/withinseminar.png}$$

As noted in article #927, this effect is negligible when $K_\text{SM,MPF}$ is small or $\tau$ is small.

Note: because this article is to be submitted to RAMS 2028, parts of it are withheld; they are scheduled to be disclosed around February 2028, after the paper is published.



Automatic Generation of Fault Trees (17)

posted by sakurai on January 14, 2025 #928

Comparing Old and New Coefficients for Method 3

Table 217.1 in an old article summarizes the hand-assigned coefficients for Method 3; here we compare them with the coefficient assignment produced by ChatGPT.

First, I showed ChatGPT the old cut-set table (Table 217.2) and the new one (Table 925.1) and had it create old.txt and new.txt.

old.txt

C4,M1,M2
C2,M2,SC1
C2,M1,SC2
C1,SC1,SC2
C1,SA1,SA2
C2,M2,MCU1
C2,M1,MCU2
C1,MCU2,SC1
C1,MCU1,SC2
C5,I2,M1
C5,I1,M2
C1,MCU1,MCU2
C3,I2,SC1
C3,I1,SC2
C7,I1,I2
C3,I2,MCU1
C3,I1,MCU2
C8,I2,P1
C8,I1,P2
C6,M2,P1
C6,M1,P2
C4,P2,SC1
C4,P1,SC2
C9,P1,P2
C4,MCU2,P1
C4,MCU1,P2
C2,D2,M1
C2,D1,M2
C1,D2,SC1
C1,D1,SC2
C1,D2,MCU1
C1,D1,MCU2
C3,D2,I1
C3,D1,I2
C3,CA2,SA1
C3,CA1,SA2
C4,D2,P1
C4,D1,P2
C1,D1,D2
C7,CA1,CA2

new.txt

C12,M1,M2
C17,M2,SC1
C17,M1,SC2
C15,SC1,SC2
C15,SA1,SA2
C17,M2,MCU1
C17,M1,MCU2
C15,MCU2,SC1
C15,MCU1,SC2
C19,I2,M1
C19,I1,M2
C15,MCU1,MCU2
C16,I2,SC1
C16,I1,SC2
C18,I1,I2
C16,I2,MCU1
C16,I1,MCU2
C13,I2,P1
C13,I1,P2
C14,M2,P1
C14,M1,P2
C12,P2,SC1
C12,P1,SC2
C11,P1,P2
C12,MCU2,P1
C12,MCU1,P2
C17,D2,M1
C17,D1,M2
C15,D2,SC1
C15,D1,SC2
C15,D2,MCU1
C15,D1,MCU2
C16,D2,I1
C16,D1,I2
C16,CA2,SA1
C16,CA1,SA2
C12,D2,P1
C12,D1,P2
C15,D1,D2
C18,CA1,CA2

I then had it create a program to compare the two files.

#!/usr/bin/env python3
import sys
from collections import defaultdict

"""
Usage:
  python transform_coverage.py old.txt new.txt

Where:
  old.txt, new.txt each line looks like:
    C4,M1,M2
    C1,SA1,SA2
  etc.

We parse the first token as coverage name (e.g. 'C4'),
the rest as element names (e.g. 'M1','M2').
We store them in a dictionary keyed by the sorted tuple of elements.

Then we produce a transform table:
  old_coverage -> new_coverage
for each matching element-tuple.
If multiple old coverages map to the same new coverage or vice versa,
we show those collisions or many-to-many relationships explicitly.
"""

def parse_cutset_file(filename):
    """
    Parse lines like "C4,M1,M2" into:
      coverage='C4'
      elements=('M1','M2')  # sorted
    We'll store coverage -> set of element_tuples
    We'll also store element_tuple -> coverage
    Returns (coverage2sets, element2cov)
    """
    coverage2sets = defaultdict(set)   # coverage -> { (el1,el2,...) , ... }
    element2cov = {}
    with open(filename, "r", encoding="utf-8") as f:
        for line in f:
            line=line.strip()
            if not line or line.startswith("#"):
                continue
            # split by comma
            parts = [x.strip() for x in line.split(",")]
            coverage = parts[0]
            elements = parts[1:]
            # sort elements so (M1,M2) == (M2,M1)
            sorted_e = tuple(sorted(elements))
            coverage2sets[coverage].add(sorted_e)
            element2cov[sorted_e] = coverage
    return coverage2sets, element2cov

def main(oldfile, newfile):
    old_cov2sets, old_elem2cov = parse_cutset_file(oldfile)
    new_cov2sets, new_elem2cov = parse_cutset_file(newfile)

    # We'll create a transform table: oldCov -> newCov
    # by checking each element tuple in old
    # and see what coverage is assigned in new.
    transform_map = defaultdict(set)  
    # key=oldCoverageName, value=set of newCoverageNames
    # Because it's possible multiple new coverages appear.

    # Also track if some old coverage references multiple distinct new coverage
    # or vice versa.

    # all element tuples from old
    for etuple, oldcov in old_elem2cov.items():
        if etuple in new_elem2cov:
            newcov = new_elem2cov[etuple]
            transform_map[oldcov].add(newcov)
        else:
            # no match in new => discrepancy
            transform_map[oldcov].add("NO_MATCH_IN_NEW")

    # Now produce a nice table
    print("=== Coverage transform table ===")
    for oldcov in sorted(transform_map.keys()):
        newcovs = transform_map[oldcov]
        if len(newcovs)==1:
            single = list(newcovs)[0]
            if single=="NO_MATCH_IN_NEW":
                print(f"{oldcov} => [NO MATCH in NEW file!]")
            else:
                print(f"{oldcov} => {single}")
        else:
            # multiple new coverages
            nclist = ",".join(sorted(newcovs))
            print(f"{oldcov} => {{{nclist}}}  # multiple new coverage found for same old coverage")

    print()
    print("=== Reverse check: new coverage => old coverage ===")
    # We'll do similarly in reverse
    # create new_elem2cov from parse => done
    # but we want coverage -> set of element tuples
    rev_map = defaultdict(set)  # new-> old coverage
    for etuple, newcov in new_elem2cov.items():
        if etuple in old_elem2cov:
            oldcov = old_elem2cov[etuple]
            rev_map[newcov].add(oldcov)
        else:
            rev_map[newcov].add("NO_MATCH_IN_OLD")

    for newcov in sorted(rev_map.keys()):
        oldcovs = rev_map[newcov]
        if len(oldcovs)==1:
            single = list(oldcovs)[0]
            if single=="NO_MATCH_IN_OLD":
                print(f"{newcov} => [NO MATCH in OLD file!]")
            else:
                print(f"{newcov} => {single}")
        else:
            # multiple old coverage
            oclist = ",".join(sorted(oldcovs))
            print(f"{newcov} => {{{oclist}}}  # multiple old coverage found for same new coverage")

    print()
    print("=== Detailed mismatch check (cutset by cutset) ===")
    # We'll unify the union of all element tuples
    all_etups = set(list(old_elem2cov.keys())+ list(new_elem2cov.keys()))
    for et in sorted(all_etups):
        ocov = old_elem2cov.get(et,"-")
        ncov = new_elem2cov.get(et,"-")
        elements_str = ",".join(et)
        print(f"Cutset({elements_str}): old={ocov} , new={ncov}")

if __name__=="__main__":
    if len(sys.argv)<3:
        print("Usage: python transform_coverage.py old.txt new.txt")
        sys.exit(1)
    oldfile=sys.argv[1]
    newfile=sys.argv[2]
    main(oldfile,newfile)


Next, the following command displays the old-to-new correspondence:

$ python transform_coverage.py old.txt new.txt | head -10 | tail -9

The output is:

C1 => C15
C2 => C17
C3 => C16
C4 => C12
C5 => C19
C6 => C14
C7 => C18
C8 => C13
C9 => C11

Based on this output, we summarize the correspondence in a table. The values of the new coefficients C?? were taken directly from the BEI file in the MARD.

Table 928.1
Old symbol   Old value    New symbol   New value
C1           0.2280772    C15          0.2280772
C2           0.2287720    C17          0.2287720
C3           0.2310880    C16          0.2310880
C4           0.2357200    C12          0.2357200
C5           0.2588800    C19          0.2588800
C6           0.3052000    C14          0.3052000
C7           0.3515200    C18          0.3515200
C8           0.5368000    C13          0.5368000
C9           1.0000000    C11          1.0000000

Although the coefficient names are assigned differently, the coefficient values and their assignment to element pairs were confirmed to match exactly. The old values C1–C9 themselves are those of Table 217.1.
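As a sanity check, any one of these constants can be recomputed from the Method 3 factor formula used by the generator script. A minimal sketch, assuming the DC values listed there (DC(MCU) = 0.99) and $\tau/T_\text{lifetime}=3420/15000$:

```python
# Cross-check one coefficient from Table 928.1, using assumed DC values
# from the generator script: DC(MCU) = 0.99, tau/T_lifetime = 3420/15000.
tau_over_TL = 3420.0 / 15000.0
base = (1 - 0.99) * (1 - 0.99)             # both MCU faults missed by the 2nd SM
factor = base + (1 - base) * tau_over_TL   # Method 3 coverage factor
assert abs(factor - 0.2280772) < 1e-7      # matches C1 (old) = C15 (new)
```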

Despite the assigned coefficients being identical, the top-event violation probability differs slightly between the old and new articles, old $8.32\color{red}{1}\cdot 10^{-4}\Rightarrow$ new $8.32\color{red}{4}\cdot 10^{-4}$, as does the PMHF, old $55.4\color{red}{7}\Rightarrow$ new $55.4\color{red}{9}$ [FIT]. This is thought to be due to the difference in Saphire versions: the old results used the 2020 edition, the new ones the 2024 edition.

Note: because this article is to be submitted to RAMS 2026, parts of it are withheld; they are scheduled to be disclosed around February 2026, after the paper is published.



Automatic Generation of Fault Trees (16)

posted by sakurai on January 13, 2025 #927

Comparing the Three Methods against Reference Values

Since Methods 1, 2, and 3 gave slightly different results, they are summarized in a table. The rows are: first, the reference values computed in Excel approximating the unreliability $F(t)$ as $$F(t)\approx\lambda t$$ next, the same with the unreliability $F(t)$ expressed as $$F(t)=1-e^{-\lambda t}$$ and finally, the values from a cut-set analysis with Saphire 8.2.9 (2024 edition).

                                     Method 1    Method 2    Method 3
Excel, $F(t)\approx\lambda T$
  Top-event violation probability    3.428e-3    7.903e-5    8.425e-4
  PMHF [FIT]                         228.5       5.269       56.17
Excel, $F(t)=1-e^{-\lambda T}$
  Top-event violation probability    3.385e-3    7.840e-5    8.324e-4
  PMHF [FIT]                         225.7       5.227       55.49
Saphire
  Top-event violation probability    3.381e-3    7.841e-5    8.324e-4
  PMHF [FIT]                         225.4       5.227       55.49

Since the Saphire results almost coincide with the Excel results using $F(t)=1-e^{-\lambda T}$ (to be explained in the next article), we can see that Saphire computes the unreliability exactly rather than with the approximation. However, the first form, $\lambda T$, is a term that arises from the approximation used in deriving the PMHF equation and is valid in its own right, so it cannot be said that the $1-e^{-\lambda T}$ or Saphire calculations are more correct.
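The gap between the two unreliability expressions can be checked in a couple of lines; with $\lambda=10^{-6}$ /H and $T=15{,}000$ H (the values used throughout this series) it is under 1%:

```python
import math

# Approximate vs. exact unreliability at lambda = 1e-6 /H, T = 15,000 H.
lam, T = 1.0e-6, 1.5e4
approx = lam * T                # F(t) ~ lambda*t      -> 1.500e-2
exact = 1 - math.exp(-lam * T)  # F(t) = 1-e^(-lam*t)  -> about 1.489e-2
print(f"{approx:.3e} vs {exact:.3e}")
```

The 1.489E-02 entries in the BEI listings elsewhere in this series are exactly this second quantity.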

Effect of $\tau$ on the PMHF

Next we examine how the value of $\tau$ affects Method 3. The paper appears to use $\tau=1$ [H], whereas the blog calculates with $\tau=3,420$ [H].

Currently, at $\tau=3,420$ [H] the PMHF is 55.49 [FIT]. Investigating with Excel's Goal Seek how small $\tau$ must become for the PMHF to reach the ASIL-D target of 10 [FIT] led to the conclusion $$\img[-1.35em]{/images/withinseminar.png}$$ This corresponds to 2.17% of the vehicle lifetime: for the redundant system in this example to satisfy ASIL-D, the periodic inspection-and-repair interval must be less than 2.17% of the vehicle lifetime.

Note: because this article is to be submitted to RAMS 2026, parts of it are withheld; they are scheduled to be disclosed around February 2026, after the paper is published.



Automatic Generation of Fault Trees (15)

posted by sakurai on January 10, 2025 #926

Below is the Python program I had ChatGPT-o1 pro create for Method 3. Based on each element's failure rate and its coverage by the 2nd SM, it generates the MARD file set that defines the FT.

#!/usr/bin/env python3
import math

# 1) FIT (lambda) and diagnostic coverage (DC) for each element
FIT_map = {
    "P":   2.33e-7,
    "MCU": 8.18e-7,
    "D":   1.09e-7,
    "I":   5.99e-7,
    "M":   1.00e-6,
    "SC":  1.00e-6,
    "CA":  5.10e-8,
    "SA":  1.00e-6,
}
DC_map = {
    "P":   0.00,
    "MCU": 0.99,
    "D":   0.99,
    "I":   0.60,
    "M":   0.90,
    "SC":  0.99,
    "CA":  0.60,
    "SA":  0.99,
}

# Upstream G11,G21 => 36 pairs; downstream G12,G22 => 4 pairs; 40 in total
G11 = ["P1","MCU1","D1","I1","M1","SC1"]
G21 = ["P2","MCU2","D2","I2","M2","SC2"]
G12 = ["CA1","SA1"]
G22 = ["CA2","SA2"]

T_mission = 1.5e4         # 15000h
tau_over_TL = 3420.0/15000.0  # 0.228

def lam(e):
    """ Return FIT value (lambda) for e (e.g. 'M1' -> base 'M'). """
    return FIT_map[e[:-1]]

def DC(e):
    """ Return DC coverage for e. """
    return DC_map[e[:-1]]

def factor_method3(e1,e2):
    """ Compute the coverage factor (before 4-digit rounding) for Method 3:
        base = (1-DC(e1))*(1-DC(e2))
        factor = base + (1-base)*tau_over_TL
    """
    base = (1 - DC(e1)) * (1 - DC(e2))
    return base + (1.0 - base)*tau_over_TL

def all_pairs():
    """ All 40 pairs: 36 upstream + 4 downstream """
    idx=0
    for x in G11:
        for y in G21:
            idx+=1
            yield idx,x,y
    for x in G12:
        for y in G22:
            idx+=1
            yield idx,x,y

# 2) Compute the 40 pairs
pairs = []
for (i,e1,e2) in all_pairs():
    pairs.append((i,e1,e2))
pairs.sort(key=lambda x:x[0])

# Manage coverage factors uniquely after 4-digit rounding, naming them C11, C12, ... in order of appearance
coverage_map = {}  # dict: factor_4 -> coverage_name
coverage_list = [] # list to store (coverage_name, factor_4)

def coverage_name_for(f4):
    """ Assign a coverage name to the 4-digit-rounded factor f4.
        The first factor seen gets C11, the next C12, ...
    """
    if f4 in coverage_map:
        return coverage_map[f4]
    # next index
    idx = len(coverage_map)  # 0-based
    cname = f"C{idx+11}"     # e.g. if idx=0 => 'C11', idx=1 => 'C12', ...
    coverage_map[f4] = cname
    coverage_list.append( (cname, f4) )
    return cname

results = []
for (i,e1,e2) in pairs:
    raw_factor = factor_method3(e1,e2)   # not rounded yet
    f4 = round(raw_factor,4)            # round to 4 digits
    c_name = coverage_name_for(f4)
    results.append( (i,e1,e2,raw_factor,f4,c_name) )

results.sort(key=lambda x:x[0])

# Basic events
basic_events = [
  "P1","MCU1","D1","I1","M1","SC1","CA1","SA1",
  "P2","MCU2","D2","I2","M2","SC2","CA2","SA2"
]

def lam_str(e):
    """ return lam in 3E format """
    l = lam(e)
    return f"{l:.3E}"

def unreli(e):
    """ 1-exp(-lam*T_mission) """
    l = lam(e)
    return 1.0 - math.exp(-l*T_mission)

# 3) Output the five files
#   METHOD3.BED, METHOD3.BEI, METHOD3.FTD, METHOD3.FTL, METHOD3.GTD
bed_lines = []
bei_lines = []
ftd_lines = []
ftl_lines = []
gtd_lines = []

# 3.1 BED
bed_lines.append("*Saphire 8.2.9\nTEST =")
bed_lines.append("* Name , Descriptions , Project")
for be in basic_events:
    bed_lines.append(f"{be} , {be}desc , TEST")
for (cn,f4) in coverage_list:
    bed_lines.append(f"{cn} , CoverageFactor_{cn} , TEST")

# 3.2 BEI
bei_lines.append("*Saphire 8.2.9\nTEST =")
bei_lines.append("* Name ,FdT,UdC,UdT,UdValue,Prob,Lambda,Tau,Mission,Init,PF,UdValue2,Calc. Prob,Freq,Analysis Type,Phase Type,Project")
# basic events
for be in basic_events:
    l = lam(be)
    l_str = f"{l:.3E}"
    un = unreli(be)
    un_str = f"{un:.3E}"
    # FdT=3 => exp. dist, prob => un
    line = f"{be} ,3, , ,0.000E+000,0.000E+000,{l_str},0,1.500E+004, , ,0.000E+000,{un_str}, ,RANDOM,CD,TEST"
    bei_lines.append(line)
# coverage
for (cn,f4) in coverage_list:
    f_str = f"{f4:.4E}"
    line = f"{cn} ,1, , ,0.000E+000,{f_str},0.000E+000,0,0.000E+000, , ,0.000E+000,{f_str}, ,RANDOM,CD,TEST"
    bei_lines.append(line)

# 3.3 FTD
ftd_lines.append("TEST =")
ftd_lines.append("* Name , Description, SubTree, Alternate, Project")
ftd_lines.append("METHOD3 ,Method3TopDefinition, , ,TEST")

# 3.4 FTL
ftl_lines.append("*Saphire 8.2.9")
ftl_lines.append("TEST,METHOD3 =")
mcsnames = [f"MCS{i:02d}" for i in range(1,41)]
mcs_line = " ".join(mcsnames)
ftl_lines.append(f"METHOD3 OR {mcs_line}")
for (i,e1,e2,rawf,f4,cname) in results:
    mcs = f"MCS{i:02d}"
    ftl_lines.append(f"{mcs} AND {e1} {e2} {cname}")

# 3.5 GTD
gtd_lines.append("TEST=")
gtd_lines.append("* Name , Description, Project")
gtd_lines.append("METHOD3 ,Method3TopGate, TEST")
for i in range(1,41):
    gtd_lines.append(f"MCS{i:02d} ,Pair{i:02d}, TEST")


# ---- Write out actual files ----
def write_file(fname, lines):
    with open(fname, "w", encoding="utf-8") as f:
        for ln in lines:
            f.write(ln+"\n")

write_file("METHOD3.BED", bed_lines)
write_file("METHOD3.BEI", bei_lines)
write_file("METHOD3.FTD", ftd_lines)
write_file("METHOD3.FTL", ftl_lines)
write_file("METHOD3.GTD", gtd_lines)

print("Created METHOD3.BED, METHOD3.BEI, METHOD3.FTD, METHOD3.FTL, METHOD3.GTD successfully.")




posted by sakurai on January 9, 2025 #925

Loading the MARD files generated in the previous article into SAPHIRE produces the FT shown in Figure 925.1.

Figure 925.1. Fault tree for Method 3

Next, apply Solve to perform logical reduction, and display the cut sets with View CutSet.

Table 925.1. Cut sets of the fault tree for Method 3

As shown in Table 925.1, the top-event probability is $\img[-1.35em]{/images/withinseminar.png}$.

A previous article that used Saphire in 2020 for the same Method 3 gave a top-event probability of 8.321E-04. The slight difference may be due to changes in rounding error or internal precision.

Note: because this article is to be submitted to RAMS 2026, parts of it are withheld; they are scheduled to be disclosed around February 2026, after the paper is published.



posted by sakurai on January 7, 2025 #923

I had ChatGPT read the RBD from the previous section and generate a MARD with the top event named METHOD3. The coefficients for the 2nd SM effect were reused, so 9 coefficients cover what would otherwise be 40 product terms. The MARD generated by ChatGPT is shown below.

METHOD3.MARD

TEST_Subs\METHOD3.BED
TEST_Subs\METHOD3.BEI
TEST_Subs\METHOD3.FTD
TEST_Subs\METHOD3.FTL
TEST_Subs\METHOD3.GTD

METHOD3.BED

*Saphire 8.2.9
TEST =
* Name , Descriptions , Project
P1 , P1desc , TEST
MCU1 , MCU1desc , TEST
D1 , D1desc , TEST
I1 , I1desc , TEST
M1 , M1desc , TEST
SC1 , SC1desc , TEST
CA1 , CA1desc , TEST
SA1 , SA1desc , TEST
P2 , P2desc , TEST
MCU2 , MCU2desc , TEST
D2 , D2desc , TEST
I2 , I2desc , TEST
M2 , M2desc , TEST
SC2 , SC2desc , TEST
CA2 , CA2desc , TEST
SA2 , SA2desc , TEST
C11 , CoverageFactor_C11 , TEST
C12 , CoverageFactor_C12 , TEST
C13 , CoverageFactor_C13 , TEST
C14 , CoverageFactor_C14 , TEST
C15 , CoverageFactor_C15 , TEST
C16 , CoverageFactor_C16 , TEST
C17 , CoverageFactor_C17 , TEST
C18 , CoverageFactor_C18 , TEST
C19 , CoverageFactor_C19 , TEST

METHOD3.BEI

*Saphire 8.2.9
TEST =
* Name ,FdT,UdC,UdT,UdValue,Prob,Lambda,Tau,Mission,Init,PF,UdValue2,Calc. Prob,Freq,Analysis Type,Phase Type,Project
P1 ,3, , ,0.000E+000,0.000E+000,2.330E-07,0,1.500E+004, , ,0.000E+000,3.489E-03, ,RANDOM,CD,TEST
MCU1 ,3, , ,0.000E+000,0.000E+000,8.180E-07,0,1.500E+004, , ,0.000E+000,1.220E-02, ,RANDOM,CD,TEST
D1 ,3, , ,0.000E+000,0.000E+000,1.090E-07,0,1.500E+004, , ,0.000E+000,1.634E-03, ,RANDOM,CD,TEST
I1 ,3, , ,0.000E+000,0.000E+000,5.990E-07,0,1.500E+004, , ,0.000E+000,8.945E-03, ,RANDOM,CD,TEST
M1 ,3, , ,0.000E+000,0.000E+000,1.000E-06,0,1.500E+004, , ,0.000E+000,1.489E-02, ,RANDOM,CD,TEST
SC1 ,3, , ,0.000E+000,0.000E+000,1.000E-06,0,1.500E+004, , ,0.000E+000,1.489E-02, ,RANDOM,CD,TEST
CA1 ,3, , ,0.000E+000,0.000E+000,5.100E-08,0,1.500E+004, , ,0.000E+000,7.647E-04, ,RANDOM,CD,TEST
SA1 ,3, , ,0.000E+000,0.000E+000,1.000E-06,0,1.500E+004, , ,0.000E+000,1.489E-02, ,RANDOM,CD,TEST
P2 ,3, , ,0.000E+000,0.000E+000,2.330E-07,0,1.500E+004, , ,0.000E+000,3.489E-03, ,RANDOM,CD,TEST
MCU2 ,3, , ,0.000E+000,0.000E+000,8.180E-07,0,1.500E+004, , ,0.000E+000,1.220E-02, ,RANDOM,CD,TEST
D2 ,3, , ,0.000E+000,0.000E+000,1.090E-07,0,1.500E+004, , ,0.000E+000,1.634E-03, ,RANDOM,CD,TEST
I2 ,3, , ,0.000E+000,0.000E+000,5.990E-07,0,1.500E+004, , ,0.000E+000,8.945E-03, ,RANDOM,CD,TEST
M2 ,3, , ,0.000E+000,0.000E+000,1.000E-06,0,1.500E+004, , ,0.000E+000,1.489E-02, ,RANDOM,CD,TEST
SC2 ,3, , ,0.000E+000,0.000E+000,1.000E-06,0,1.500E+004, , ,0.000E+000,1.489E-02, ,RANDOM,CD,TEST
CA2 ,3, , ,0.000E+000,0.000E+000,5.100E-08,0,1.500E+004, , ,0.000E+000,7.647E-04, ,RANDOM,CD,TEST
SA2 ,3, , ,0.000E+000,0.000E+000,1.000E-06,0,1.500E+004, , ,0.000E+000,1.489E-02, ,RANDOM,CD,TEST
C11 ,1, , ,0.000E+000,1.0000E+00,0.000E+000,0,0.000E+000, , ,0.000E+000,1.0000E+00, ,RANDOM,CD,TEST
C12 ,1, , ,0.000E+000,2.3570E-01,0.000E+000,0,0.000E+000, , ,0.000E+000,2.3570E-01, ,RANDOM,CD,TEST
C13 ,1, , ,0.000E+000,5.3680E-01,0.000E+000,0,0.000E+000, , ,0.000E+000,5.3680E-01, ,RANDOM,CD,TEST
C14 ,1, , ,0.000E+000,3.0520E-01,0.000E+000,0,0.000E+000, , ,0.000E+000,3.0520E-01, ,RANDOM,CD,TEST
C15 ,1, , ,0.000E+000,2.2810E-01,0.000E+000,0,0.000E+000, , ,0.000E+000,2.2810E-01, ,RANDOM,CD,TEST
C16 ,1, , ,0.000E+000,2.3110E-01,0.000E+000,0,0.000E+000, , ,0.000E+000,2.3110E-01, ,RANDOM,CD,TEST
C17 ,1, , ,0.000E+000,2.2880E-01,0.000E+000,0,0.000E+000, , ,0.000E+000,2.2880E-01, ,RANDOM,CD,TEST
C18 ,1, , ,0.000E+000,3.5150E-01,0.000E+000,0,0.000E+000, , ,0.000E+000,3.5150E-01, ,RANDOM,CD,TEST
C19 ,1, , ,0.000E+000,2.5890E-01,0.000E+000,0,0.000E+000, , ,0.000E+000,2.5890E-01, ,RANDOM,CD,TEST

METHOD3.FTD

TEST =
* Name , Description, SubTree, Alternate, Project
METHOD3 ,Method3TopDefinition, , ,TEST

METHOD3.FTL

*Saphire 8.2.9
TEST,METHOD3 =
METHOD3 OR MCS01 MCS02 MCS03 MCS04 MCS05 MCS06 MCS07 MCS08 MCS09 MCS10 MCS11 MCS12 MCS13 MCS14 MCS15 MCS16 MCS17 MCS18 MCS19 MCS20 MCS21 MCS22 MCS23 MCS24 MCS25 MCS26 MCS27 MCS28 MCS29 MCS30 MCS31 MCS32 MCS33 MCS34 MCS35 MCS36 MCS37 MCS38 MCS39 MCS40
MCS01 AND P1 P2 C11
MCS02 AND P1 MCU2 C12
MCS03 AND P1 D2 C12
MCS04 AND P1 I2 C13
MCS05 AND P1 M2 C14
MCS06 AND P1 SC2 C12
MCS07 AND MCU1 P2 C12
MCS08 AND MCU1 MCU2 C15
MCS09 AND MCU1 D2 C15
MCS10 AND MCU1 I2 C16
MCS11 AND MCU1 M2 C17
MCS12 AND MCU1 SC2 C15
MCS13 AND D1 P2 C12
MCS14 AND D1 MCU2 C15
MCS15 AND D1 D2 C15
MCS16 AND D1 I2 C16
MCS17 AND D1 M2 C17
MCS18 AND D1 SC2 C15
MCS19 AND I1 P2 C13
MCS20 AND I1 MCU2 C16
MCS21 AND I1 D2 C16
MCS22 AND I1 I2 C18
MCS23 AND I1 M2 C19
MCS24 AND I1 SC2 C16
MCS25 AND M1 P2 C14
MCS26 AND M1 MCU2 C17
MCS27 AND M1 D2 C17
MCS28 AND M1 I2 C19
MCS29 AND M1 M2 C12
MCS30 AND M1 SC2 C17
MCS31 AND SC1 P2 C12
MCS32 AND SC1 MCU2 C15
MCS33 AND SC1 D2 C15
MCS34 AND SC1 I2 C16
MCS35 AND SC1 M2 C17
MCS36 AND SC1 SC2 C15
MCS37 AND CA1 CA2 C18
MCS38 AND CA1 SA2 C16
MCS39 AND SA1 CA2 C16
MCS40 AND SA1 SA2 C15

METHOD3.GTD

TEST=
* Name , Description, Project
METHOD3 ,Method3TopGate, TEST
MCS01 ,Pair01, TEST
MCS02 ,Pair02, TEST
MCS03 ,Pair03, TEST
MCS04 ,Pair04, TEST
MCS05 ,Pair05, TEST
MCS06 ,Pair06, TEST
MCS07 ,Pair07, TEST
MCS08 ,Pair08, TEST
MCS09 ,Pair09, TEST
MCS10 ,Pair10, TEST
MCS11 ,Pair11, TEST
MCS12 ,Pair12, TEST
MCS13 ,Pair13, TEST
MCS14 ,Pair14, TEST
MCS15 ,Pair15, TEST
MCS16 ,Pair16, TEST
MCS17 ,Pair17, TEST
MCS18 ,Pair18, TEST
MCS19 ,Pair19, TEST
MCS20 ,Pair20, TEST
MCS21 ,Pair21, TEST
MCS22 ,Pair22, TEST
MCS23 ,Pair23, TEST
MCS24 ,Pair24, TEST
MCS25 ,Pair25, TEST
MCS26 ,Pair26, TEST
MCS27 ,Pair27, TEST
MCS28 ,Pair28, TEST
MCS29 ,Pair29, TEST
MCS30 ,Pair30, TEST
MCS31 ,Pair31, TEST
MCS32 ,Pair32, TEST
MCS33 ,Pair33, TEST
MCS34 ,Pair34, TEST
MCS35 ,Pair35, TEST
MCS36 ,Pair36, TEST
MCS37 ,Pair37, TEST
MCS38 ,Pair38, TEST
MCS39 ,Pair39, TEST
MCS40 ,Pair40, TEST
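Given the standing caution that ChatGPT output must be verified, a small consistency check of a generated FTL file is cheap insurance. The helper below is hypothetical (the function and default file name are assumptions, not part of SAPHIRE or the MARD tooling); it confirms that every gate referenced under the top OR is actually defined:

```python
# Hypothetical sanity check for a generated FTL file; the helper name and
# default path are assumptions, not part of SAPHIRE or the MARD format.
def check_ftl(path="METHOD3.FTL"):
    with open(path, encoding="utf-8") as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    # The top gate line looks like: "METHOD3 OR MCS01 MCS02 ..."
    top = next(ln for ln in lines if ln.startswith("METHOD3 OR"))
    gates = top.split()[2:]
    # Gate definition lines look like: "MCS01 AND P1 P2 C11"
    defined = {ln.split()[0] for ln in lines if " AND " in ln}
    missing = [g for g in gates if g not in defined]
    assert not missing, f"undefined gates: {missing}"
    print(f"{len(gates)} gates referenced under the top OR, all defined.")
    return len(gates)
```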




posted by sakurai on January 6, 2025 #922

RBD for Method 3

Next we have it create the model of the redundant EPS with a 2nd SM (Method 3). The RBD is the same as for Method 2.

Before obtaining the MARD, computing the reference values in Excel gives, as in Figure 922.2, a top-event violation probability of 8.425e-4 and a PMHF that rises again to 56.2 [FIT]. However, this is for $\tau=3,420$ [H]; the paper appears to use $\tau=1$ [H], in which case the top-event violation probability and PMHF are nearly the same as for Method 2.
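The Excel-style reference value can be reproduced to good accuracy with a short rare-event-approximation script: sum, over the 40 element pairs, the product of the two approximated unreliabilities $\lambda T$ and the Method 3 coverage factor. A sketch reusing the failure rates and DC values from the generator program:

```python
# Rare-event approximation of the Method 3 top-event probability:
# sum F1*F2*factor over all 40 pairs with F = lambda*T (Excel-style
# approximation). FIT and DC values are those of the generator program.
FIT = {"P": 2.33e-7, "MCU": 8.18e-7, "D": 1.09e-7, "I": 5.99e-7,
       "M": 1.00e-6, "SC": 1.00e-6, "CA": 5.10e-8, "SA": 1.00e-6}
DC = {"P": 0.00, "MCU": 0.99, "D": 0.99, "I": 0.60,
      "M": 0.90, "SC": 0.99, "CA": 0.60, "SA": 0.99}
T, tau_over_TL = 1.5e4, 3420.0 / 15000.0

def factor(a, b):
    """Method 3 coverage factor for one element pair."""
    base = (1 - DC[a]) * (1 - DC[b])
    return base + (1 - base) * tau_over_TL

upstream = ["P", "MCU", "D", "I", "M", "SC"]
pairs = ([(a, b) for a in upstream for b in upstream]
         + [(a, b) for a in ("CA", "SA") for b in ("CA", "SA")])
top = sum(FIT[a] * T * FIT[b] * T * factor(a, b) for a, b in pairs)
print(f"top-event probability ~ {top:.3e}")  # close to the Excel value 8.425e-4
```

Small residual differences from the Excel figure come from the 4-digit rounding of the coverage factors used in the FT itself.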

Thus, when the periodic inspection interval $\tau$ is somewhat large, it has a large effect on the PMHF, although this is limited to cases like this example, where the architecture is redundant and the residual failure rate is low.

Incidentally, past articles used $\tau=3,420$ [H], but this was in fact an error for $\tau=24\cdot30\cdot6=4,320$ [H], based on provisionally setting the periodic inspection interval to half a year.

Figure 922.2. Reference values for Method 3


posted by sakurai on January 3, 2025 #921

Loading these MARD files into SAPHIRE produces the FT shown in Figure 921.1.

Figure 921.1. Fault tree for Method 2

Next, apply Solve to perform logical reduction, and display the cut sets with View CutSet.

Table 921.1. Cut sets of the fault tree for Method 2

As shown in Table 921.1, the top-event probability is $\img[-1.35em]{/images/withinseminar.png}$.

Note: because this article is to be submitted to RAMS 2026, parts of it are withheld; they are scheduled to be disclosed around February 2026, after the paper is published.



posted by sakurai on January 2, 2025 #920

I had ChatGPT read the RBD from the previous section and generate a MARD with the top event named METHOD2. The coefficients for the 2nd SM effect were reused, so 9 coefficients cover what would otherwise be 40. The MARD is shown below.

METHOD2.MARD

TEST_Subs\METHOD2.BED
TEST_Subs\METHOD2.BEI
TEST_Subs\METHOD2.FTD
TEST_Subs\METHOD2.FTL
TEST_Subs\METHOD2.GTD

METHOD2.BED

*Saphire 8.2.9
TEST =
* Name, Descriptions, Project
P1, P1desc, TEST
MCU1, MCU1desc, TEST
D1, D1desc, TEST
I1, I1desc, TEST
M1, M1desc, TEST
SC1, SC1desc, TEST
CA1, CA1desc, TEST
SA1, SA1desc, TEST
P2, P2desc, TEST
MCU2, MCU2desc, TEST
D2, D2desc, TEST
I2, I2desc, TEST
M2, M2desc, TEST
SC2, SC2desc, TEST
CA2, CA2desc, TEST
SA2, SA2desc, TEST
C1, CoverageFactor_C1, TEST
C2, CoverageFactor_C2, TEST
C3, CoverageFactor_C3, TEST
C4, CoverageFactor_C4, TEST
C5, CoverageFactor_C5, TEST
C6, CoverageFactor_C6, TEST
C7, CoverageFactor_C7, TEST
C8, CoverageFactor_C8, TEST
C9, CoverageFactor_C9, TEST

METHOD2.BEI

*Saphire 8.2.9
TEST =
* Name ,FdT,UdC,UdT,UdValue,Prob,Lambda,Tau,Mission,Init,PF,UdValue2,Calc. Prob,Freq,Analysis Type,Phase Type,Project
P1 ,3, , ,0.000E+000,0.000E+000,2.330E-07,0,1.500E+004, , ,0.000E+000,3.489E-03, ,RANDOM,CD,TEST
MCU1 ,3, , ,0.000E+000,0.000E+000,8.180E-07,0,1.500E+004, , ,0.000E+000,1.220E-02, ,RANDOM,CD,TEST
D1 ,3, , ,0.000E+000,0.000E+000,1.090E-07,0,1.500E+004, , ,0.000E+000,1.634E-03, ,RANDOM,CD,TEST
I1 ,3, , ,0.000E+000,0.000E+000,5.990E-07,0,1.500E+004, , ,0.000E+000,8.945E-03, ,RANDOM,CD,TEST
M1 ,3, , ,0.000E+000,0.000E+000,1.000E-06,0,1.500E+004, , ,0.000E+000,1.489E-02, ,RANDOM,CD,TEST
SC1 ,3, , ,0.000E+000,0.000E+000,1.000E-06,0,1.500E+004, , ,0.000E+000,1.489E-02, ,RANDOM,CD,TEST
CA1 ,3, , ,0.000E+000,0.000E+000,5.100E-08,0,1.500E+004, , ,0.000E+000,7.647E-04, ,RANDOM,CD,TEST
SA1 ,3, , ,0.000E+000,0.000E+000,1.000E-06,0,1.500E+004, , ,0.000E+000,1.489E-02, ,RANDOM,CD,TEST
P2 ,3, , ,0.000E+000,0.000E+000,2.330E-07,0,1.500E+004, , ,0.000E+000,3.489E-03, ,RANDOM,CD,TEST
MCU2 ,3, , ,0.000E+000,0.000E+000,8.180E-07,0,1.500E+004, , ,0.000E+000,1.220E-02, ,RANDOM,CD,TEST
D2 ,3, , ,0.000E+000,0.000E+000,1.090E-07,0,1.500E+004, , ,0.000E+000,1.634E-03, ,RANDOM,CD,TEST
I2 ,3, , ,0.000E+000,0.000E+000,5.990E-07,0,1.500E+004, , ,0.000E+000,8.945E-03, ,RANDOM,CD,TEST
M2 ,3, , ,0.000E+000,0.000E+000,1.000E-06,0,1.500E+004, , ,0.000E+000,1.489E-02, ,RANDOM,CD,TEST
SC2 ,3, , ,0.000E+000,0.000E+000,1.000E-06,0,1.500E+004, , ,0.000E+000,1.489E-02, ,RANDOM,CD,TEST
CA2 ,3, , ,0.000E+000,0.000E+000,5.100E-08,0,1.500E+004, , ,0.000E+000,7.647E-04, ,RANDOM,CD,TEST
SA2 ,3, , ,0.000E+000,0.000E+000,1.000E-06,0,1.500E+004, , ,0.000E+000,1.489E-02, ,RANDOM,CD,TEST
C1 ,1, , ,0.000E+000,1.0000E+00,0.000E+000,0,0.000E+000, , ,0.000E+000,1.0000E+00, ,RANDOM,CD,TEST
C2 ,1, , ,0.000E+000,1.0000E-02,0.000E+000,0,0.000E+000, , ,0.000E+000,1.0000E-02, ,RANDOM,CD,TEST
C3 ,1, , ,0.000E+000,4.0000E-01,0.000E+000,0,0.000E+000, , ,0.000E+000,4.0000E-01, ,RANDOM,CD,TEST
C4 ,1, , ,0.000E+000,1.0000E-01,0.000E+000,0,0.000E+000, , ,0.000E+000,1.0000E-01, ,RANDOM,CD,TEST
C5 ,1, , ,0.000E+000,1.0000E-04,0.000E+000,0,0.000E+000, , ,0.000E+000,1.0000E-04, ,RANDOM,CD,TEST
C6 ,1, , ,0.000E+000,4.0000E-03,0.000E+000,0,0.000E+000, , ,0.000E+000,4.0000E-03, ,RANDOM,CD,TEST
C7 ,1, , ,0.000E+000,1.0000E-03,0.000E+000,0,0.000E+000, , ,0.000E+000,1.0000E-03, ,RANDOM,CD,TEST
C8 ,1, , ,0.000E+000,1.6000E-01,0.000E+000,0,0.000E+000, , ,0.000E+000,1.6000E-01, ,RANDOM,CD,TEST
C9 ,1, , ,0.000E+000,4.0000E-02,0.000E+000,0,0.000E+000, , ,0.000E+000,4.0000E-02, ,RANDOM,CD,TEST

METHOD2.FTD

TEST =
* Name , Description, SubTree, Alternate, Project
METHOD2 ,Method2TopDefinition, , ,TEST

METHOD2.FTL

*Saphire 8.2.9
TEST,METHOD2 =
METHOD2 OR MCS01 MCS02 MCS03 MCS04 MCS05 MCS06 MCS07 MCS08 MCS09 MCS10 MCS11 MCS12 MCS13 MCS14 MCS15 MCS16 MCS17 MCS18 MCS19 MCS20 MCS21 MCS22 MCS23 MCS24 MCS25 MCS26 MCS27 MCS28 MCS29 MCS30 MCS31 MCS32 MCS33 MCS34 MCS35 MCS36 MCS37 MCS38 MCS39 MCS40
MCS01 AND P1 P2 C1
MCS02 AND P1 MCU2 C2
MCS03 AND P1 D2 C2
MCS04 AND P1 I2 C3
MCS05 AND P1 M2 C4
MCS06 AND P1 SC2 C2
MCS07 AND MCU1 P2 C2
MCS08 AND MCU1 MCU2 C5
MCS09 AND MCU1 D2 C5
MCS10 AND MCU1 I2 C6
MCS11 AND MCU1 M2 C7
MCS12 AND MCU1 SC2 C5
MCS13 AND D1 P2 C2
MCS14 AND D1 MCU2 C5
MCS15 AND D1 D2 C5
MCS16 AND D1 I2 C6
MCS17 AND D1 M2 C7
MCS18 AND D1 SC2 C5
MCS19 AND I1 P2 C3
MCS20 AND I1 MCU2 C6
MCS21 AND I1 D2 C6
MCS22 AND I1 I2 C8
MCS23 AND I1 M2 C9
MCS24 AND I1 SC2 C6
MCS25 AND M1 P2 C4
MCS26 AND M1 MCU2 C7
MCS27 AND M1 D2 C7
MCS28 AND M1 I2 C9
MCS29 AND M1 M2 C2
MCS30 AND M1 SC2 C7
MCS31 AND SC1 P2 C2
MCS32 AND SC1 MCU2 C5
MCS33 AND SC1 D2 C5
MCS34 AND SC1 I2 C6
MCS35 AND SC1 M2 C7
MCS36 AND SC1 SC2 C5
MCS37 AND CA1 CA2 C8
MCS38 AND CA1 SA2 C6
MCS39 AND SA1 CA2 C6
MCS40 AND SA1 SA2 C5

METHOD2.GTD

TEST=
* Name , Description, Project
METHOD2 ,Method2TopGate ,TEST
MCS01 ,Pair01, TEST
MCS02 ,Pair02, TEST
MCS03 ,Pair03, TEST
MCS04 ,Pair04, TEST
MCS05 ,Pair05, TEST
MCS06 ,Pair06, TEST
MCS07 ,Pair07, TEST
MCS08 ,Pair08, TEST
MCS09 ,Pair09, TEST
MCS10 ,Pair10, TEST
MCS11 ,Pair11, TEST
MCS12 ,Pair12, TEST
MCS13 ,Pair13, TEST
MCS14 ,Pair14, TEST
MCS15 ,Pair15, TEST
MCS16 ,Pair16, TEST
MCS17 ,Pair17, TEST
MCS18 ,Pair18, TEST
MCS19 ,Pair19, TEST
MCS20 ,Pair20, TEST
MCS21 ,Pair21, TEST
MCS22 ,Pair22, TEST
MCS23 ,Pair23, TEST
MCS24 ,Pair24, TEST
MCS25 ,Pair25, TEST
MCS26 ,Pair26, TEST
MCS27 ,Pair27, TEST
MCS28 ,Pair28, TEST
MCS29 ,Pair29, TEST
MCS30 ,Pair30, TEST
MCS31 ,Pair31, TEST
MCS32 ,Pair32, TEST
MCS33 ,Pair33, TEST
MCS34 ,Pair34, TEST
MCS35 ,Pair35, TEST
MCS36 ,Pair36, TEST
MCS37 ,Pair37, TEST
MCS38 ,Pair38, TEST
MCS39 ,Pair39, TEST
MCS40 ,Pair40, TEST
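For Method 2 the coverage constants are simply the products of the two elements' undetected fractions $(1-\mathrm{DC})$, so the BEI values above can be spot-checked in a few lines. A sketch, with the DC values assumed from the generator script used elsewhere in this series:

```python
# Method 2 coverage factor: (1-DC1)*(1-DC2); DC values are assumptions
# taken from the generator script in this series.
DC = {"P": 0.00, "MCU": 0.99, "D": 0.99, "I": 0.60,
      "M": 0.90, "SC": 0.99, "CA": 0.60, "SA": 0.99}
f = lambda a, b: (1 - DC[a]) * (1 - DC[b])

assert abs(f("P", "P") - 1.0) < 1e-12        # C1
assert abs(f("P", "MCU") - 1.0e-2) < 1e-12   # C2
assert abs(f("P", "I") - 4.0e-1) < 1e-12     # C3
assert abs(f("P", "M") - 1.0e-1) < 1e-12     # C4
assert abs(f("MCU", "MCU") - 1.0e-4) < 1e-12 # C5
assert abs(f("MCU", "I") - 4.0e-3) < 1e-12   # C6
assert abs(f("MCU", "M") - 1.0e-3) < 1e-12   # C7
assert abs(f("I", "I") - 1.6e-1) < 1e-12     # C8
assert abs(f("I", "M") - 4.0e-2) < 1e-12     # C9
```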




posted by sakurai on January 1, 2025 #919

RBD for Method 2

Next we have it create the model of the redundant EPS with a 2nd SM (Method 2).

Figure 919.1. RBD for Method 2

Before obtaining the MARD, computing the reference values in Excel gives, as in Figure 919.2, a top-event violation probability of 7.903E-05 and a PMHF that drops sharply to 5.27 [FIT].

Figure 919.2. Reference FTA values for Method 2


