Page Not Found
Page not found. Your pixels are in another canvas. Read more
A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.
This is a page not in the main menu. Read more
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false. Read more
Published:
This is a sample blog post. Lorem ipsum. I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool. Read more
Published:
This is a sample blog post. Lorem ipsum. I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool. Read more
Published:
This is a sample blog post. Lorem ipsum. I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool. Read more
Published:
This is a sample blog post. Lorem ipsum. I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool. Read more
Short description of portfolio item number 1
Read more
Short description of portfolio item number 2
Read more
Published in arXiv preprint, 2020
Randomized smoothing has achieved state-of-the-art certified robustness against l2-norm adversarial attacks. However, it remains unresolved how to find the optimal base classifier for randomized smoothing. In this work, we employ a Smoothed WEighted ENsembling (SWEEN) scheme to improve the performance of randomized smoothed classifiers. We theoretically show how SWEEN can be trained to achieve near-optimal risk in the randomized smoothing regime. We also develop an adaptive prediction algorithm to reduce the prediction and certification cost of SWEEN models. Extensive experiments illustrate the benefits of employing SWEEN. Read more
Recommended citation: Chizhou Liu, **Yunzhen Feng**, Ranran Wang, Bin Dong https://arxiv.org/abs/2005.09363
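The gist of the scheme can be illustrated with a toy sketch (my own paraphrase of the abstract, not the authors' released code): a SWEEN-style model averages the base classifiers' softmax probabilities with trainable weights, and prediction then follows ordinary Monte-Carlo randomized smoothing over Gaussian-perturbed inputs. The class and function names (WeightedEnsemble, smoothed_predict), the noise level sigma, and the sample count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class WeightedEnsemble(nn.Module):
    """Weighted average of base classifiers' softmax probabilities."""
    def __init__(self, base_models):
        super().__init__()
        self.base_models = nn.ModuleList(base_models)
        # one trainable weight per base classifier, mapped to the simplex via softmax
        self.weight_logits = nn.Parameter(torch.zeros(len(base_models)))

    def forward(self, x):
        w = torch.softmax(self.weight_logits, dim=0)            # (M,)
        probs = torch.stack(
            [torch.softmax(m(x), dim=-1) for m in self.base_models]
        )                                                       # (M, B, C)
        return torch.einsum("m,mbc->bc", w, probs)              # (B, C)

def smoothed_predict(model, x, sigma=0.25, n_samples=100):
    """Monte-Carlo randomized smoothing: majority class under Gaussian input noise."""
    with torch.no_grad():
        num_classes = model(x).shape[-1]
        votes = torch.zeros(num_classes)
        for _ in range(n_samples):
            noisy = x + sigma * torch.randn_like(x)
            votes[model(noisy).argmax(dim=-1)] += 1
        return int(votes.argmax())

if __name__ == "__main__":
    # Toy usage: two linear "classifiers" on 32-dim inputs with 10 classes.
    ensemble = WeightedEnsemble([nn.Linear(32, 10), nn.Linear(32, 10)])
    x = torch.randn(1, 32)
    print(smoothed_predict(ensemble, x))
```

In practice the ensemble weights would be trained on a smoothing-aware objective and far more noise samples would be drawn for certification; the sketch only shows how a weighted ensemble slots in as the base classifier for randomized smoothing.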
Published in arXiv preprint, 2020
Understanding what information neural networks capture is an essential problem in deep learning, and studying whether different models capture similar features is an initial step toward this goal. Previous works sought to define metrics over the feature matrices to measure the difference between two models. In this work, we propose a novel metric that goes beyond these approaches: we argue that the difference between two representations should be measured by how they perform when used for downstream tasks. To that end, we introduce the transferred discrepancy (TD), a new metric that defines the difference between two representations based on their downstream-task performance. We also find that TD may be used to evaluate the effectiveness of different training strategies; this suggests that a training strategy that leads to more robust representations also trains models that generalize better. Read more
Recommended citation: **Yunzhen Feng**\*, Runtian Zhai\*, Di He, Liwei Wang, Bin Dong https://arxiv.org/abs/2007.12446
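A rough sketch of the idea (my reading of the abstract, not the paper's exact estimator): fit one probe per representation on the same downstream labels and measure how often the two probes disagree on held-out data. The function name, the use of a logistic-regression probe, and the disagreement measure are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def downstream_disagreement(feats_a, feats_b, labels, test_frac=0.3, seed=0):
    """Train one linear probe per representation on a shared downstream task;
    return the rate at which their held-out predictions disagree."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    n_test = int(test_frac * len(labels))
    test, train = idx[:n_test], idx[n_test:]

    probe_a = LogisticRegression(max_iter=1000).fit(feats_a[train], labels[train])
    probe_b = LogisticRegression(max_iter=1000).fit(feats_b[train], labels[train])

    pred_a = probe_a.predict(feats_a[test])
    pred_b = probe_b.predict(feats_b[test])
    return float(np.mean(pred_a != pred_b))

if __name__ == "__main__":
    # Toy usage: two random 64-dim "representations" of the same 500 inputs.
    rng = np.random.default_rng(1)
    y = rng.integers(0, 5, size=500)
    fa = rng.normal(size=(500, 64))
    fb = fa @ rng.normal(size=(64, 64)) + 0.1 * rng.normal(size=(500, 64))
    print(downstream_disagreement(fa, fb, y))
```

Comparing this disagreement across several downstream tasks would give a task-aware picture of how far apart two representations are, which is the spirit of the metric described in the abstract.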
Published:
This is a description of your talk, which is a markdown file that can be all markdown-ified like any other post. Yay markdown! Read more
Published:
This is a description of your conference proceedings talk; note the different value in the type field. You can put anything in this field. Read more
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post. Read more
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post. Read more