Accepted Paper
The HO-FMN paper has been accepted for publication in Neurocomputing!
Published in CEUR Workshop Proceedings, Vol. 3260, pp. 150-168, 2022
This paper discusses methods to explain machine learning-based Domain Generation Algorithm (DGA) detectors using DNS traffic data.
Recommended citation: Piras, G., Pintor, M., Demetrio, L., & Biggio, B. (2022). "Explaining Machine Learning DGA Detectors from DNS Traffic Data." CEUR Workshop Proceedings, 3260, 150-168.
Download Paper
Published in ESANN, 2023
This paper presents enhancements to fast minimum-norm adversarial attacks through hyperparameter optimization techniques.
Recommended citation: Floris, G., Mura, R., Scionis, L., Piras, G., Pintor, M., Demontis, A., & Biggio, B. (2023). "Improving Fast Minimum-Norm Attacks with Hyperparameter Optimization." arXiv preprint arXiv:2310.08177.
Download Paper
Published in the International Conference on Machine Learning and Cybernetics (ICMLC), 2023
This research re-evaluates the effectiveness of adversarial pruning techniques in neural networks.
Recommended citation: Piras, G., Pintor, M., Demontis, A., & Biggio, B. (2023). "Samples on Thin Ice: Re-Evaluating Adversarial Pruning of Neural Networks." 2023 International Conference on Machine Learning and Cybernetics (ICMLC).
Download Paper
Published in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023
This study explores adversarial attacks targeting uncertainty quantification methods in machine learning models.
Recommended citation: Ledda, E., Angioni, D., Piras, G., Fumera, G., Biggio, B., & Roli, F. (2023). "Adversarial Attacks Against Uncertainty Quantification." Proceedings of the IEEE/CVF International Conference on Computer Vision.
Download Paper
Published in arXiv preprint arXiv:2409.01249, 2024
This survey provides a comprehensive overview and benchmark of pruning methods aimed at enhancing adversarial robustness in neural networks.
Recommended citation: Piras, G., Pintor, M., Demontis, A., Biggio, B., Giacinto, G., & Roli, F. (2024). "Adversarial Pruning: A Survey and Benchmark of Pruning Methods for Adversarial Robustness." arXiv preprint arXiv:2409.01249.
Download Paper
Published in arXiv preprint arXiv:2410.21952, 2024
This paper examines the robustness of adversarial training methods against uncertainty-based attacks.
Recommended citation: Ledda, E., Scodeller, G., Angioni, D., Piras, G., Cinà, A. E., Fumera, G., Biggio, B., & Roli, F. (2024). "On the Robustness of Adversarial Training Against Uncertainty Attacks." arXiv preprint arXiv:2410.21952.
Download Paper
Published in Neurocomputing, Vol. 616, Article 128918, 2025
This study introduces HO-FMN, a method for hyperparameter optimization in fast minimum-norm adversarial attacks.
Recommended citation: Mura, R., Floris, G., Scionis, L., Piras, G., Pintor, M., Demontis, A., Giacinto, G., & Biggio, B. (2025). "HO-FMN: Hyperparameter Optimization for Fast Minimum-Norm Attacks." Neurocomputing, 616, 128918.
Download Paper