Jonathan Peck
ORCID iD: 0000-0003-2929-4164
Publications:

- An introduction to adversarially robust deep learning (Journal Article, A1, open access)
- Improving the robustness of deep neural networks to adversarial perturbations (PhD Thesis, 2023, open access)
- Calibrated multi-probabilistic prediction as a defense against adversarial attacks
- Inline detection of DGA domains using side information (Journal Article, A1, open access)
- Regional image perturbation reduces Lp norms of adversarial examples while maintaining model-to-model transferability
- Detecting adversarial manipulation using inductive Venn-ABERS predictors (Journal Article, A1, open access)
- Distillation of Deep Reinforcement Learning Models using Fuzzy Inference Systems (Conference Paper, C3, open access)
- Hardening DGA classifiers utilizing IVAP (Conference Paper, P1, open access)
- CharBot: a simple and effective method for evading DGA classifiers (Journal Article, A1, open access)
- Detecting adversarial examples with inductive Venn-ABERS predictors (Conference Paper, C1, open access)