New scientific research achievements of the school (1)

Published by: Li Qian    Date: 2024-05-14

FaceScape: 3D Facial Dataset and Benchmark for Single-View 3D Face Reconstruction

In this article, we present a large-scale detailed 3D face dataset, FaceScape, and the corresponding benchmark to evaluate single-view facial 3D reconstruction. By training on FaceScape data, a novel algorithm is proposed to predict elaborate riggable 3D face models from a single image input. The FaceScape dataset releases 16,940 textured 3D faces, captured from 847 subjects, each with 20 specific expressions. The 3D models contain pore-level facial geometry that is also processed to be topologically uniform. These fine 3D facial models can be represented as a 3D morphable model for coarse shapes and displacement maps for detailed geometry. Taking advantage of the large-scale and high-accuracy dataset, a novel algorithm is further proposed to learn the expression-specific dynamic details using a deep neural network. The learned relationship serves as the foundation of our 3D face prediction system from a single image input. Different from most previous methods, our predicted 3D models are riggable with highly detailed geometry under different expressions. We also use FaceScape data to generate the in-the-wild and in-the-lab benchmarks to evaluate recent methods of single-view face reconstruction. The accuracy is reported and analyzed on the dimensions of camera pose and focal length, which provides a faithful and comprehensive evaluation and reveals new challenges. The unprecedented dataset, benchmark, and code have been released to the public for research purposes.
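
As a rough illustration of the representation described above (a morphable model for the coarse shape plus a displacement map for fine detail), the following minimal sketch combines a linear identity/expression model with a per-vertex displacement along vertex normals. The array sizes, basis dimensions, and variable names are illustrative assumptions only and do not reflect FaceScape's released data format or tooling.

    import numpy as np

    # Hypothetical sizes for illustration only; FaceScape defines its own formats.
    num_vertices = 26317          # assumed vertex count of the shared topology
    num_id_coeffs = 50            # assumed number of identity basis vectors
    num_exp_coeffs = 20           # one coefficient per captured expression (assumption)

    mean_shape = np.zeros((num_vertices, 3))                  # placeholder mean face
    id_basis = np.zeros((num_id_coeffs, num_vertices, 3))     # placeholder identity basis
    exp_basis = np.zeros((num_exp_coeffs, num_vertices, 3))   # placeholder expression basis
    vertex_normals = np.zeros((num_vertices, 3))              # placeholder unit normals

    def reconstruct_face(id_coeffs, exp_coeffs, displacement):
        """Coarse morphable-model shape plus per-vertex displacement detail."""
        # Coarse geometry: linear combination of identity and expression bases.
        coarse = (mean_shape
                  + np.tensordot(id_coeffs, id_basis, axes=1)
                  + np.tensordot(exp_coeffs, exp_basis, axes=1))
        # Fine geometry: displace each vertex along its normal by the value
        # sampled from the displacement map (here already given per vertex).
        return coarse + displacement[:, None] * vertex_normals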

Links: https://ieeexplore.ieee.org/document/10226244


Online machine learning algorithms

Online machine learning addresses a key limitation of traditional batch machine learning, namely its inability to process streaming data (such as Internet data), and is therefore an active research area of significant academic interest.

Professor Yang Lin from the School of Intelligent Science and Technology at Nanjing University has achieved a series of high-quality results in research on fundamental algorithms for online learning and optimization. Through a series of innovative techniques, this work has made breakthrough progress in the applicability, efficiency, and scalability of online learning and optimization methods. It also provides innovative solutions to the communication bottleneck faced by learning and optimization algorithms in distributed and parallel computing environments, a problem of wide concern both domestically and internationally. Comparisons with current mainstream algorithms show that these techniques achieve world-leading (i.e., minimal) communication resource consumption while maintaining optimal learning and optimization performance.
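
As background on the online-learning paradigm referred to above, the sketch below shows the standard predict/observe/update loop of online gradient descent over a data stream. It is a generic textbook template under an assumed squared loss and a simple decaying step size, not the communication-efficient algorithms developed in the cited works.

    import numpy as np

    def online_gradient_descent(stream, dim, eta=0.1):
        """Generic online-learning loop: predict, observe the loss, update."""
        w = np.zeros(dim)
        total_loss = 0.0
        for t, (x, y) in enumerate(stream, start=1):
            pred = w @ x                        # predict on the incoming example
            loss = 0.5 * (pred - y) ** 2        # observe the (squared) loss
            total_loss += loss
            grad = (pred - y) * x               # gradient of the loss at w
            w -= (eta / np.sqrt(t)) * grad      # update with a decaying step size
        return w, total_loss

Here, stream can be any iterator of (feature vector, label) pairs, so the learner never needs the full dataset in memory.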

Links:

1. https://dl.acm.org/doi/pdf/10.1145/3570618

2. https://openreview.net/pdf?id=QTXKTXJKIh

3. https://proceedings.mlr.press/v216/wang23a/wang23a.pdf

4. https://proceedings.mlr.press/v206/chen23c/chen23c.pdf


Sharper Analysis for Minibatch Stochastic Proximal Point Methods: Stability, Smoothness, and Deviation

The stochastic proximal point (SPP) methods have gained recent attention for stochastic optimization, with strong convergence guarantees and superior robustness to the classic stochastic gradient descent (SGD) methods showcased at little to no added computational overhead. In this article, we study a minibatch variant of SPP, namely M-SPP, for solving convex composite risk minimization problems. The core contribution is a set of novel excess risk bounds of M-SPP derived through the lens of algorithmic stability theory. Particularly, under smoothness and quadratic growth conditions, we show that M-SPP with minibatch size n and iteration count T enjoys an in-expectation fast rate of convergence consisting of an O(1/T²) bias decaying term and an O(1/(nT)) variance decaying term. In the small-n-large-T setting, this result substantially improves the best known results of SPP-type approaches by revealing the impact of the noise level of the model on the convergence rate. In the complementary small-T-large-n regime, we propose a two-phase extension of M-SPP to achieve comparable convergence rates. Additionally, we establish a deviation bound on the parameter estimation error of a sampling-without-replacement variant of M-SPP, which holds with high probability over the randomness of data while in expectation over the randomness of the algorithm. Numerical evidence is provided to support our theoretical predictions when substantialized to the Lasso and logistic regression models.
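
For intuition, the following is a minimal sketch of one way the minibatch stochastic proximal point (M-SPP) update can be instantiated for least squares, where each proximal subproblem has a closed-form solution. The squared loss, the constant proximal parameter gamma, and the function name are assumptions made for illustration; the paper analyzes general convex composite objectives.

    import numpy as np

    def minibatch_spp_least_squares(X, y, batch_size, gamma, num_iters, seed=0):
        """Sketch of an M-SPP iteration for least squares (closed-form prox)."""
        rng = np.random.default_rng(seed)
        n_samples, dim = X.shape
        w = np.zeros(dim)
        for _ in range(num_iters):
            idx = rng.choice(n_samples, size=batch_size, replace=False)
            Xb, yb = X[idx], y[idx]
            # Solve argmin_w (1/b)*0.5*||Xb w - yb||^2 + (1/(2*gamma))*||w - w_t||^2
            A = Xb.T @ Xb / batch_size + np.eye(dim) / gamma
            b = Xb.T @ yb / batch_size + w / gamma
            w = np.linalg.solve(A, b)
        return w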

Links: https://www.jmlr.org/papers/volume24/21-1219/21-1219.pdf


Exponential Generalization Bounds with Near-Optimal Rates for Lq-Stable Algorithms

The stability of learning algorithms to changes in the training sample has been actively studied as a powerful proxy for reasoning about generalization. Recently, exponential generalization and excess risk bounds with near-optimal rates have been obtained under the stringent and distribution-free notion of uniform stability (Bousquet et al., 2020; Klochkov & Zhivotovskiy, 2021). Meanwhile, under the notion of Lq-stability, which is weaker and distribution-dependent, exponential generalization bounds are also available, yet so far only with sub-optimal rates. Therefore, a fundamental question we would like to address in this paper is whether it is possible to derive near-optimal exponential generalization bounds for Lq-stable learning algorithms. As the core contribution of the present work, we give an affirmative answer to this question by developing strict analogues of the near-optimal generalization and risk bounds of uniformly stable algorithms for Lq-stable algorithms. Further, we demonstrate the power of our improved Lq-stability and generalization theory by applying it to derive strong sparse excess risk bounds, under mild conditions, for computationally tractable sparsity estimation algorithms such as Iterative Hard Thresholding (IHT).
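
Since the paper applies its theory to Iterative Hard Thresholding (IHT), here is a standard textbook sketch of IHT for sparse least squares: a gradient step followed by keeping only the k largest-magnitude coordinates. The step-size choice and the squared-loss objective are illustrative assumptions, not the exact setting analyzed in the paper.

    import numpy as np

    def iterative_hard_thresholding(X, y, k, step=None, num_iters=100):
        """Minimal IHT for sparse least squares: gradient step, then keep top-k."""
        n_samples, dim = X.shape
        if step is None:
            # Conservative step size: inverse smoothness of (1/(2n))*||Xw - y||^2.
            step = n_samples / (np.linalg.norm(X, 2) ** 2)
        w = np.zeros(dim)
        for _ in range(num_iters):
            grad = X.T @ (X @ w - y) / n_samples
            w = w - step * grad
            # Hard thresholding: zero out everything outside the top-k support.
            support = np.argsort(np.abs(w))[-k:]
            mask = np.zeros(dim, dtype=bool)
            mask[support] = True
            w[~mask] = 0.0
        return w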

Links: https://openreview.net/pdf?id=1_jtWjhSSkr


L2-Uniform Stability of Randomized Learning Algorithms: Sharper Generalization Bounds and Confidence Boosting

Exponential generalization bounds with near-optimal rates have recently been established for uniformly stable algorithms (Feldman and Vondrák, 2019; Bousquet et al., 2020). We seek to extend these best known high-probability bounds from deterministic learning algorithms to the regime of randomized learning. One simple approach for achieving this goal is to define the stability for the expectation over the algorithm's randomness, which may result in a sharper stability parameter but only leads to guarantees regarding the on-average generalization error. Another natural option is to consider the stability conditioned on the algorithm's randomness, which is far more stringent but may lead to generalization with high probability jointly over the randomness of sample and algorithm. The present paper addresses such a tension between these two alternatives and makes progress towards relaxing it inside a classic framework of confidence boosting. To this end, we first introduce a novel concept of L2-uniform stability that holds uniformly over data but in second moment over the algorithm's randomness. Then, as a core contribution of this work, we prove a strong exponential bound on the first moment of generalization error under the notion of L2-uniform stability. As an interesting consequence of the bound, we show that a bagging-based meta algorithm leads to near-optimal generalization with high probability jointly over the randomness of data and algorithm. We further substantialize these generic results to stochastic gradient descent (SGD) to derive sharper exponential bounds for convex or non-convex optimization with natural time-decaying learning rates, which have not been possible to prove with the existing stability-based generalization guarantees.
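
To ground the two ingredients highlighted above, SGD with a time-decaying learning rate and a bagging-style confidence-boosting wrapper, here is a minimal hypothetical sketch: several independent randomized runs are trained and the model with the smallest validation error is kept. The loss, learning-rate constants, and aggregation rule are assumptions for illustration; the paper's exact meta-algorithm may differ.

    import numpy as np

    def sgd_time_decay(X, y, epochs=5, c=1.0, seed=0):
        """SGD on 0.5*(x·w - y)^2 with a time-decaying learning rate eta_t = c / t."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        t = 0
        for _ in range(epochs):
            for i in rng.permutation(n):
                t += 1
                grad = (X[i] @ w - y[i]) * X[i]
                w -= (c / t) * grad
        return w

    def boost_confidence(X, y, X_val, y_val, num_runs=5):
        """Hypothetical confidence-boosting wrapper: pick the best of several runs."""
        best_w, best_err = None, np.inf
        for seed in range(num_runs):
            w = sgd_time_decay(X, y, seed=seed)
            err = np.mean((X_val @ w - y_val) ** 2)   # validation error of this run
            if err < best_err:
                best_w, best_err = w, err
        return best_w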

Links: https://openreview.net/pdf?id=GEQZ52oqxa


Learning Neural Proto-Face Field for Disentangled 3D Face Modeling in the Wild

Generative models show good potential for recovering 3D faces beyond limited shape assumptions. While plausible details and resolutions are achieved, these models easily fail under extreme conditions of pose, shadow, or appearance, due to entangled fitting or the lack of multi-view priors. To address this problem, this paper presents a novel Neural Proto-face Field (NPF) for unsupervised robust 3D face modeling. Instead of requiring constrained images as Neural Radiance Fields (NeRF) do, NPF disentangles the common and specific facial cues, i.e., ID, expression, and scene-specific details, from in-the-wild photo collections. Specifically, NPF learns a face prototype to aggregate 3D-consistent identity via uncertainty modeling, extracting multi-image priors from a photo collection. NPF then learns to deform the prototype with the appropriate facial expressions, constrained by a loss of expression consistency and personal idiosyncrasies. Finally, NPF is optimized to fit a target image in the collection, recovering specific details of appearance and geometry. In this way, the generative model benefits from multi-image priors and meaningful facial structures. Extensive experiments on benchmarks show that NPF recovers superior or competitive facial shapes and textures compared to state-of-the-art methods.
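
As a loose, hypothetical illustration of "aggregating 3D-consistent identity via uncertainty modeling", the snippet below performs an inverse-variance-style weighted average of per-image identity codes into a single prototype code. This is only one plausible reading under assumed inputs (per-image codes and log-variances); the paper's actual formulation, networks, and losses are not reproduced here.

    import numpy as np

    def aggregate_prototype(per_image_codes, per_image_logvars):
        """Uncertainty-weighted aggregation of per-image codes into a prototype.

        per_image_codes: array of shape (num_images, code_dim)
        per_image_logvars: per-image log-variances, broadcastable to the codes
        """
        weights = np.exp(-np.asarray(per_image_logvars))   # lower variance -> larger weight
        weights = weights / weights.sum(axis=0, keepdims=True)
        return (weights * np.asarray(per_image_codes)).sum(axis=0)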

Links: http://openaccess.thecvf.com/content/CVPR2023/papers/Zhang_Learning_Neural_Proto-Face_Field_for_Disentangled_3D_Face_Modeling_in_CVPR_2023_paper.pdf