IEEE 2986-2023
$44.42
IEEE Recommended Practice for Privacy and Security for Federated Machine Learning (Approved Draft)
Published By | Publication Date | Number of Pages |
---|---|---|
IEEE | 2023 | 57 |
New IEEE Standard – Active. Privacy and security issues pose great challenges to the federated machine learning (FML) community. This recommended practice provides a general view of privacy and security risks in FML and of the applicable privacy and security requirements. It is organized in four parts: malicious and non-malicious failures in FML; privacy and security requirements from the perspectives of the system and of FML participants; defensive methods and fault recovery methods; and privacy and security risk evaluation. It also provides guidance for typical FML scenarios in different industry areas, which can help practitioners use FML more effectively.
PDF Catalog
PDF Pages | PDF Title |
---|---|
1 | IEEE Std 2986-2023 Front Cover |
2 | Title page |
4 | Important Notices and Disclaimers Concerning IEEE Standards Documents |
8 | Participants |
10 | Introduction |
11 | Contents |
12 | 1. Overview 1.1 Scope 1.2 Purpose |
13 | 1.3 Word usage 2. Normative references 3. Definitions, acronyms, and abbreviations 3.1 Definitions |
14 | 3.2 Acronyms and abbreviations |
15 | 4. Common failures 4.1 Non-malicious failure 4.1.1 Participant reporting failure 4.1.2 Noisy model updates 4.1.3 Non-IID |
16 | 4.2 Malicious failure 4.2.1 Data attacks 4.2.1.1 Training sample sniffing 4.2.1.2 Recovery of training data properties |
17 | 4.2.2 Model attacks 4.2.2.1 Before training |
18 | 4.2.2.2 During training |
19 | 5. Privacy and security requirements 5.1 From the perspective of system 5.1.1 Confidentiality 5.1.1.1 Identity authentication 5.1.1.2 Authorization |
20 | 5.1.1.3 Interface security 5.1.1.4 Transmission 5.1.1.5 Cryptographic algorithm 5.1.2 Integrity 5.1.3 Availability |
21 | 5.1.4 Controllability 5.1.5 Robustness 5.1.6 Privacy-preserving |
22 | 5.2 From the perspective of FML participants 5.2.1 Data owner |
23 | 5.2.2 Coordinator 5.2.3 Model user |
24 | 5.2.4 Auditor 6. Defensive methods and fault recovery methods 6.1 Fault recovery methods for non-malicious failure |
25 | 6.2 Defensive methods for data attack 6.2.1 Secure multiparty computation |
26 | 6.2.2 Differential privacy 6.2.3 AI-based approaches |
27 | 6.2.4 Other methods |
29 | 6.3 Defensive methods for model attacks 6.3.1 Adversarial training 6.3.2 Malicious participants detection |
31 | 6.3.3 Resilient aggregation 6.3.4 Digital signature for model |
33 | 7. Evaluation 7.1 Privacy 7.1.1 Privacy risk identification |
35 | 7.1.2 Privacy risk analysis |
36 | 7.1.3 Privacy risk evaluation |
37 | 7.2 Security 7.2.1 Security risk identification |
38 | 7.2.2 Security risk analysis |
39 | 7.2.3 Security risk evaluation |
41 | Annex A (normative) Example of a native privacy-preserving method |
42 | Annex B (normative) Example of a privacy protection method against membership inference attacks |
43 | Annex C (informative) Example of the quantitative evaluation method |
44 | Annex D (informative) FML scenarios D.1 FML in distributed manufacturing management scenario D.1.1 Data owner |
45 | D.1.2 Coordinator D.1.3 Model user |
46 | D.2 FML in financial scenario D.2.1 Basic requirements D.2.2 Financial data security requirements |
47 | D.2.3 Web network security requirements D.2.4 Communication security requirements D.2.5 Verification and audit |
48 | D.2.6 Password requirements |
49 | Annex E (informative) Example of calculating the existing probability |
50 | Annex F (informative) Risk level for privacy and security F.1 Risk level for privacy |
51 | F.2 Risk level for security |
54 | Annex G (informative) Bibliography |
57 | Back Cover |