Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

Future Blog Post

less than 1 minute read

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.
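For reference, a minimal sketch of the relevant setting, assuming a standard Jekyll _config.yml (the future option is part of Jekyll's configuration; everything else here is a placeholder):

```yaml
# _config.yml (sketch) -- assuming a standard Jekyll setup
# When false, posts dated in the future are excluded from the build.
future: false
```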

Blog Post number 4

less than 1 minute read

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Portfolio

Publications

Enhancing Certified Robustness of Smoothed Classifiers via Weighted Model Ensembling

Published in arXiv preprint, 2020

Randomized smoothing has achieved state-of-the-art certified robustness against l2-norm adversarial attacks. However, how to find the optimal base classifier for randomized smoothing remains an open question. In this work, we employ a Smoothed WEighted ENsembling (SWEEN) scheme to improve the performance of randomized smoothed classifiers. We theoretically show how SWEEN can be trained to achieve near-optimal risk in the randomized smoothing regime. We also develop an adaptive prediction algorithm to reduce the prediction and certification cost of SWEEN models. Extensive experiments illustrate the benefits of employing SWEEN.

Recommended citation: Chizhou Liu, **Yunzhen Feng**, Ranran Wang, Bin Dong. https://arxiv.org/abs/2005.09363
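As a loose illustration of the ensembling idea (a sketch, not the paper's implementation), the base classifiers' predicted probabilities can be combined with fixed weights before the usual randomized-smoothing voting step; models, weights, sigma, and n_samples below are hypothetical placeholders, and PyTorch is assumed:

```python
import torch

def weighted_ensemble_smoothed_predict(models, weights, x, sigma=0.25, n_samples=100):
    """Sketch only: a weighted ensemble of base classifiers evaluated under
    Gaussian noise, in the spirit of randomized smoothing (not the paper's code)."""
    counts = None
    for _ in range(n_samples):
        noisy_x = x + sigma * torch.randn_like(x)              # Gaussian perturbation
        probs = sum(w * torch.softmax(m(noisy_x), dim=-1)      # weighted ensemble of probabilities
                    for m, w in zip(models, weights))
        one_hot = torch.nn.functional.one_hot(probs.argmax(dim=-1), probs.shape[-1])
        counts = one_hot if counts is None else counts + one_hot
    return counts.argmax(dim=-1)                               # most frequent class wins
```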

Transferred Discrepancy: Quantifying the Difference Between Representations

Published in arXiv preprint, 2020

Understanding what information neural networks capture is an essential problem in deep learning, and studying whether different models capture similar features is an initial step toward this goal. Previous works sought to define metrics over the feature matrices to measure the difference between two models. In this work, we argue that such a metric should instead be grounded in how the representations perform on downstream tasks. To that end, we introduce the transferred discrepancy (TD), a new metric that defines the difference between two representations based on their downstream-task performance. We also find that TD can be used to evaluate the effectiveness of different training strategies, suggesting that a training strategy which yields more robust representations also trains models that generalize better.

Recommended citation: **Yunzhen Feng***, Runtian Zhai*, Di He, Liwei Wang, Bin Dong. https://arxiv.org/abs/2007.12446
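As a loose illustration of the downstream-task idea (a sketch, not the paper's definition of TD), one could fit a simple probe on each representation for the same task and measure how often their predictions disagree; the feature arrays, labels, and the scikit-learn probe below are hypothetical choices:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def downstream_disagreement(feats_a, feats_b, labels, test_a, test_b):
    """Sketch only: compare two representations via linear probes trained on
    the same downstream task, using the prediction disagreement rate."""
    probe_a = LogisticRegression(max_iter=1000).fit(feats_a, labels)
    probe_b = LogisticRegression(max_iter=1000).fit(feats_b, labels)
    return np.mean(probe_a.predict(test_a) != probe_b.predict(test_b))
```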

Talks

Teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.