Changes from all commits (31 commits)
f259068
Reduce Aggregator Memory Usage (#1459)
MasterSkepticista Mar 24, 2025
045ff62
feat(serialiser): add CollaboratorSerialiser for tensor handling and …
theakshaypant Mar 24, 2025
9848cee
Skipping GaNDLF run from PR pipeline (#1482)
payalcha Mar 25, 2025
8d9cca5
To streamline Collaborator yaml for torch/mnist with all other cols.y…
payalcha Mar 25, 2025
4831dcd
update README.md and plan.yaml for `flower-app-pytorch` workspace (#1…
kminhta Mar 25, 2025
aabc4f4
Raise error if data is not found when running `flower-app-pytorch` wo…
kminhta Mar 25, 2025
180bba6
Task Runner E2E: Remove additional folder permissioning (#1485)
noopurintel Mar 25, 2025
377884e
Issue with Gandlf solved (#1486)
payalcha Mar 25, 2025
e6e2219
OpenFL v1.8 release notes (#1487)
teoparvanov Mar 26, 2025
9d6356c
E2E automation for `flower-app-pytorch` workspace (#1494)
noopurintel Mar 26, 2025
91cbb31
1.8: Address Coverity issues related to `flower-app-pytorch` workspac…
kminhta Mar 26, 2025
5624958
Hotfix for PQ pipeline (#1497)
noopurintel Mar 26, 2025
1b93515
Add trufflehog to scan secrets in repo and logs file (#1495)
payalcha Mar 27, 2025
07ca6a6
Change openfl version from 1.8 to 1.9.0.dev (#1503)
noopurintel Mar 27, 2025
3ca96c0
PQ Pipeline: Ensure usage of same commit id across workflow jobs (#1501)
noopurintel Mar 27, 2025
3cc2e6e
fix(collaborator): desrialisation
theakshaypant Apr 7, 2025
3b61140
fix(tests): collaborator tensor codec changes
theakshaypant Apr 7, 2025
d74a183
Added conda installation instructions (#1508)
sarthakpati Apr 1, 2025
f2d1176
Bump pytest-asyncio from 0.25.3 to 0.26.0 (#1509)
dependabot[bot] Apr 3, 2025
620be3f
add hippmapp3r (#1498)
porteratzo Apr 3, 2025
cf50ad8
Bump ruff from 0.9.9 to 0.11.2 (#1515)
payalcha Apr 4, 2025
4d6370c
Task Runner E2E: Simplify the components' `start` and `stop` processe…
noopurintel Apr 4, 2025
a15a400
Bump pytest from 8.3.4 to 8.3.5 (#1510)
dependabot[bot] Apr 4, 2025
714bf6f
Bump aquasecurity/trivy-action in the github-actions group (#1512)
dependabot[bot] Apr 5, 2025
8771a36
Change to add time taken for each round in metrics (#1517)
payalcha Apr 5, 2025
1b5f349
fix(tests): collaborator tensor codec changes
theakshaypant Apr 7, 2025
33d29d9
fix(tests): remove redundant tests
theakshaypant Apr 7, 2025
9c832c1
fix(tensor codec): remove aggregator serialisation handling code and …
theakshaypant Apr 7, 2025
f05787b
fix(collaborator): remove compression_pipeline completely from collab…
theakshaypant Apr 7, 2025
ca48417
fix(collaborator serialiser): use defined aggregator client to initia…
theakshaypant Apr 7, 2025
b9944a4
fix(collaborator tests): modify/remove tests for find_dependencies
theakshaypant Apr 7, 2025
15 changes: 15 additions & 0 deletions .github/actions/trufflehog_logs_scan/action.yml
@@ -0,0 +1,15 @@
---
# Composite action to run a TruffleHog secret scan on task runner E2E test logs

name: 'TruffleHog Logs Scan'
description: 'Run TruffleHog scan on logs'

runs:
  using: 'composite'
  steps:
    - name: Run trufflehog for all log files in results
      id: trufflehog_scan
      run: |
        export PYTHONPATH="$PYTHONPATH:."
        python .github/config/parse_task_runner_logs.py --log_dir ~/results
      shell: bash
105 changes: 105 additions & 0 deletions .github/config/parse_task_runner_logs.py
@@ -0,0 +1,105 @@
# Copyright 2020-2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import argparse
import json
import logging
import os
import subprocess
import sys

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(levelname)s - %(message)s"
)
logger = logging.getLogger(__name__)


def get_log_files(log_dir):
    """
    Get all .log files in the specified directory and its subdirectories.

    Args:
        log_dir (str): Path to the directory containing log files.

    Returns:
        list: List of .log files found in the directory.
    """
    if not os.path.exists(log_dir):
        logger.error(f"Directory '{log_dir}' does not exist.")
        sys.exit(1)

    log_files = []
    for root, _, files in os.walk(log_dir):
        for file in files:
            if file.endswith(".log"):
                log_files.append(os.path.join(root, file))
    return log_files


def run_trufflehog(log_file):
    """
    Run TruffleHog on the specified log file and return the number of
    unverified secrets found.

    Args:
        log_file (str): Path to the log file to scan.

    Returns:
        int: Number of unverified secrets found in the log file.
    """
    try:
        # Run TruffleHog with JSON output and capture the output
        cmd = f'trufflehog filesystem {log_file} --no-update --json'
        result = subprocess.run(
            cmd, capture_output=True, shell=True, text=True, timeout=30, check=True
        )
        # Extract the last JSON object from the output
        lines = result.stderr.strip().split("\n")
        last_json = json.loads(lines[-1])
        # Raise an error if last_json does not contain unverified_secrets
        if "unverified_secrets" not in last_json:
            raise json.JSONDecodeError("unverified_secrets not found in JSON output", "", 0)
        logger.info(f"Unverified secrets found: {last_json['unverified_secrets']}")
        # Return the unverified_secrets count
        return last_json.get("unverified_secrets", 0)
    except subprocess.CalledProcessError as e:
        logger.error(f"Error running TruffleHog on file {log_file}: {e}")
        raise
    except json.JSONDecodeError as e:
        logger.error(f"Error decoding JSON output for file {log_file}: {e}")
        raise


def main(log_dir):
    """
    Scan log files for unverified secrets.

    Args:
        log_dir (str): Path to the directory containing log files.
    """
    # Get all .log files
    log_files = get_log_files(log_dir)
    if not log_files:
        logger.info("No .log files found.")
        return

    # Scan each log file with TruffleHog
    for log_file in log_files:
        logger.info(f"Scanning file: {log_file}")
        unverified_secrets = run_trufflehog(log_file)

        if unverified_secrets > 0:
            logger.error(f"File '{log_file}' contains {unverified_secrets} unverified secrets.")
            sys.exit(1)

    logger.info("All files scanned successfully. No unverified secrets found.")


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Scan log files for unverified secrets.")
    parser.add_argument(
        "--log_dir",
        type=str,
        required=True,
        help="Path to the directory containing log files."
    )
    args = parser.parse_args()
    log_dir = os.path.expanduser(args.log_dir)
    main(log_dir)
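The script above assumes TruffleHog prints its scan summary as the final JSON line of its output. A minimal, self-contained sketch of that extraction step, using a hypothetical captured output string (the field names mirror those the script reads; the real tool's log lines may differ between versions):

```python
import json

# Hypothetical captured output from `trufflehog filesystem <file> --no-update --json`;
# only the final summary line matters to the parser.
sample_stderr = "\n".join([
    '{"level":"info","msg":"scanning file"}',
    '{"chunks":12,"bytes":4096,"verified_secrets":0,"unverified_secrets":2}',
])

def count_unverified_secrets(stderr_text):
    """Parse the last JSON line of the scan output, mirroring run_trufflehog()."""
    lines = stderr_text.strip().split("\n")
    last_json = json.loads(lines[-1])
    if "unverified_secrets" not in last_json:
        raise json.JSONDecodeError("unverified_secrets not found in JSON output", "", 0)
    return last_json.get("unverified_secrets", 0)

print(count_unverified_secrets(sample_stderr))  # → 2
```

A non-zero count makes the workflow step fail, which is what surfaces leaked secrets in CI logs.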
56 changes: 55 additions & 1 deletion .github/workflows/pq_pipeline.yml
@@ -19,12 +19,34 @@ concurrency:
  group: ${{ github.workflow }}-${{ github.base_ref }}-${{ github.head_ref }}-${{ github.actor }}

jobs:
  set_commit_id_for_all_jobs: # Do not change this job name, it is used by other jobs to get the commit ID
    name: Get/Set Commit ID
    if: github.event.pull_request.draft == false
    runs-on: ubuntu-22.04
    outputs:
      commit_id: ${{ steps.set_commit_id.outputs.commit_id }}
    steps:
      - name: Checkout OpenFL repository
        uses: actions/checkout@v4

      - name: Set commit ID
        id: set_commit_id
        run: |
          echo "commit_id=$(git rev-parse HEAD)" >> $GITHUB_OUTPUT

      - name: Print commit ID to summary
        run: |
          echo "Commit ID used: ${{ steps.set_commit_id.outputs.commit_id }}" >> $GITHUB_STEP_SUMMARY

  wf_mnist_local_runtime:
    if: |
      (github.event_name == 'schedule' && github.repository_owner == 'securefederatedai') ||
      (github.event_name == 'workflow_dispatch')
    name: Workflow MNIST Local Runtime
    needs: set_commit_id_for_all_jobs
    uses: ./.github/workflows/workflow_interface_101_mnist.yml
    with:
      commit_id: ${{ needs.set_commit_id_for_all_jobs.outputs.commit_id }}

  wf_watermark_e2e:
    if: |
@@ -33,6 +55,8 @@ jobs:
    name: Workflow Watermarking Federated Runtime E2E
    needs: wf_mnist_local_runtime
    uses: ./.github/workflows/wf_watermarking_fed_runtime.yml
    with:
      commit_id: ${{ needs.set_commit_id_for_all_jobs.outputs.commit_id }}

  wf_secagg_e2e:
    if: |
@@ -41,13 +65,18 @@ jobs:
    name: Workflow Secure Aggregation Federated Runtime E2E
    needs: wf_watermark_e2e
    uses: ./.github/workflows/wf_secagg_fed_runtime.yml
    with:
      commit_id: ${{ needs.set_commit_id_for_all_jobs.outputs.commit_id }}

  task_runner_e2e:
    if: |
      (github.event_name == 'schedule' && github.repository_owner == 'securefederatedai') ||
      (github.event_name == 'workflow_dispatch')
    name: TaskRunner E2E
    needs: set_commit_id_for_all_jobs
    uses: ./.github/workflows/task_runner_basic_e2e.yml
    with:
      commit_id: ${{ needs.set_commit_id_for_all_jobs.outputs.commit_id }}

  task_runner_resiliency_e2e:
    if: |
@@ -56,6 +85,8 @@ jobs:
    name: TaskRunner Resiliency E2E
    needs: task_runner_e2e
    uses: ./.github/workflows/task_runner_resiliency_e2e.yml
    with:
      commit_id: ${{ needs.set_commit_id_for_all_jobs.outputs.commit_id }}

  task_runner_fedeval_e2e:
    if: |
@@ -64,13 +95,18 @@ jobs:
    name: TaskRunner FedEval E2E
    needs: task_runner_e2e
    uses: ./.github/workflows/task_runner_fedeval_e2e.yml
    with:
      commit_id: ${{ needs.set_commit_id_for_all_jobs.outputs.commit_id }}

  task_runner_secure_agg_e2e:
    if: |
      (github.event_name == 'schedule' && github.repository_owner == 'securefederatedai') ||
      (github.event_name == 'workflow_dispatch')
    name: TaskRunner Secure Aggregation E2E
    needs: set_commit_id_for_all_jobs
    uses: ./.github/workflows/task_runner_secure_agg_e2e.yml
    with:
      commit_id: ${{ needs.set_commit_id_for_all_jobs.outputs.commit_id }}

  task_runner_straggler_e2e:
    if: |
@@ -79,6 +115,8 @@ jobs:
    name: TaskRunner Straggler E2E
    needs: task_runner_resiliency_e2e
    uses: ./.github/workflows/task_runner_straggler_e2e.yml
    with:
      commit_id: ${{ needs.set_commit_id_for_all_jobs.outputs.commit_id }}

  # run basic dockerized test with keras/mnist
  task_runner_dockerized_e2e:
@@ -88,11 +126,27 @@ jobs:
    name: TaskRunner Dockerized E2E
    needs: task_runner_straggler_e2e
    uses: ./.github/workflows/task_runner_dockerized_ws_e2e.yml
    with:
      commit_id: ${{ needs.set_commit_id_for_all_jobs.outputs.commit_id }}

  # run testssl for task runner
  task_runner_secret_ssl_e2e:
    if: |
      (github.event_name == 'schedule' && github.repository_owner == 'securefederatedai') ||
      (github.event_name == 'workflow_dispatch')
    name: TaskRunner Secret SSL E2E
    uses: ./.github/workflows/task_runner_secret_tls_e2e.yml
    needs: set_commit_id_for_all_jobs
    uses: ./.github/workflows/task_runner_secret_tls_e2e.yml
    with:
      commit_id: ${{ needs.set_commit_id_for_all_jobs.outputs.commit_id }}

  # run flower app with pytorch
  task_runner_flower_app_pytorch:
    if: |
      (github.event_name == 'schedule' && github.repository_owner == 'securefederatedai') ||
      (github.event_name == 'workflow_dispatch')
    name: TaskRunner Flower App Pytorch E2E
    needs: set_commit_id_for_all_jobs
    uses: ./.github/workflows/task_runner_flower_e2e.yml
    with:
      commit_id: ${{ needs.set_commit_id_for_all_jobs.outputs.commit_id }}
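Each called workflow resolves its checkout ref via `COMMIT_ID: ${{ inputs.commit_id || github.sha }}`, so a missing input falls back to the triggering commit. A rough Python analogue of that `||` fallback (GitHub Actions expressions treat an empty string as falsy; the commit values here are hypothetical):

```python
def resolve_commit_id(input_commit_id, github_sha):
    """Mirror the Actions expression `inputs.commit_id || github.sha`:
    an unset or empty input falls through to the triggering commit."""
    return input_commit_id or github_sha

# Called from pq_pipeline: every job pins to the same commit.
print(resolve_commit_id("f259068", "abc1234"))  # → f259068
# Dispatched directly: falls back to the workflow's own commit.
print(resolve_commit_id("", "abc1234"))  # → abc1234
```

This is what lets all jobs in the PQ pipeline check out the identical commit even when the pipeline spans many reusable workflows.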
25 changes: 13 additions & 12 deletions .github/workflows/task_runner_basic_e2e.yml
@@ -5,6 +5,10 @@ name: Task_Runner_E2E # Please do not modify the name as it is used in the comp

on:
  workflow_call:
    inputs:
      commit_id:
        required: false
        type: string
  workflow_dispatch:
    inputs:
      num_rounds:
@@ -63,6 +67,7 @@ env:
  MODEL_NAME: ${{ inputs.model_name || 'all' }}
  PYTHON_VERSION: ${{ inputs.python_version || 'all' }}
  JOBS_TO_RUN: ${{ inputs.jobs_to_run || 'all' }}
  COMMIT_ID: ${{ inputs.commit_id || github.sha }} # use commit_id from the calling workflow

jobs:
  input_selection:
@@ -151,9 +156,7 @@ jobs:
        id: checkout_openfl
        uses: actions/checkout@v4
        with:
          fetch-depth: 2 # needed for detecting changes
          submodules: "true"
          token: ${{ secrets.GITHUB_TOKEN }}
          ref: ${{ env.COMMIT_ID }}

      - name: Pre test run
        uses: ./.github/actions/tr_pre_test_run
@@ -201,9 +204,7 @@ jobs:
        id: checkout_openfl
        uses: actions/checkout@v4
        with:
          fetch-depth: 2 # needed for detecting changes
          submodules: "true"
          token: ${{ secrets.GITHUB_TOKEN }}
          ref: ${{ env.COMMIT_ID }}

      - name: Pre test run
        uses: ./.github/actions/tr_pre_test_run
@@ -251,9 +252,7 @@ jobs:
        id: checkout_openfl
        uses: actions/checkout@v4
        with:
          fetch-depth: 2 # needed for detecting changes
          submodules: "true"
          token: ${{ secrets.GITHUB_TOKEN }}
          ref: ${{ env.COMMIT_ID }}

      - name: Pre test run
        uses: ./.github/actions/tr_pre_test_run
@@ -301,9 +300,7 @@ jobs:
        id: checkout_openfl
        uses: actions/checkout@v4
        with:
          fetch-depth: 2 # needed for detecting changes
          submodules: "true"
          token: ${{ secrets.GITHUB_TOKEN }}
          ref: ${{ env.COMMIT_ID }}

      - name: Pre test run
        uses: ./.github/actions/tr_pre_test_run
@@ -342,6 +339,8 @@ jobs:
      - name: Checkout OpenFL repository
        id: checkout_openfl
        uses: actions/checkout@v4
        with:
          ref: ${{ env.COMMIT_ID }}

      - name: Pre test run
        uses: ./.github/actions/tr_pre_test_run
@@ -378,6 +377,8 @@ jobs:
      - name: Checkout OpenFL repository
        id: checkout_openfl
        uses: actions/checkout@v4
        with:
          ref: ${{ env.COMMIT_ID }}

      - name: Pre test run
        uses: ./.github/actions/tr_pre_test_run
21 changes: 9 additions & 12 deletions .github/workflows/task_runner_dockerized_ws_e2e.yml
@@ -5,6 +5,10 @@ name: Task_Runner_Dockerized_E2E # Please do not modify the name as it is used

on:
  workflow_call:
    inputs:
      commit_id:
        required: false
        type: string
  workflow_dispatch:
    inputs:
      num_rounds:
@@ -37,6 +41,7 @@ env:
  NUM_ROUNDS: ${{ inputs.num_rounds || '5' }}
  NUM_COLLABORATORS: ${{ inputs.num_collaborators || '2' }}
  JOBS_TO_RUN: ${{ inputs.jobs_to_run || 'all' }}
  COMMIT_ID: ${{ inputs.commit_id || github.sha }} # default to current commit if not provided

jobs:
  input_selection:
@@ -72,9 +77,7 @@ jobs:
        id: checkout_openfl
        uses: actions/checkout@v4
        with:
          fetch-depth: 2 # needed for detecting changes
          submodules: "true"
          token: ${{ secrets.GITHUB_TOKEN }}
          ref: ${{ env.COMMIT_ID }}

      - name: Pre test run
        uses: ./.github/actions/tr_pre_test_run
@@ -115,9 +118,7 @@ jobs:
        id: checkout_openfl
        uses: actions/checkout@v4
        with:
          fetch-depth: 2 # needed for detecting changes
          submodules: "true"
          token: ${{ secrets.GITHUB_TOKEN }}
          ref: ${{ env.COMMIT_ID }}

      - name: Pre test run
        uses: ./.github/actions/tr_pre_test_run
@@ -158,9 +159,7 @@ jobs:
        id: checkout_openfl
        uses: actions/checkout@v4
        with:
          fetch-depth: 2 # needed for detecting changes
          submodules: "true"
          token: ${{ secrets.GITHUB_TOKEN }}
          ref: ${{ env.COMMIT_ID }}

      - name: Pre test run
        uses: ./.github/actions/tr_pre_test_run
@@ -201,9 +200,7 @@ jobs:
        id: checkout_openfl
        uses: actions/checkout@v4
        with:
          fetch-depth: 2 # needed for detecting changes
          submodules: "true"
          token: ${{ secrets.GITHUB_TOKEN }}
          ref: ${{ env.COMMIT_ID }}

      - name: Pre test run
        uses: ./.github/actions/tr_pre_test_run