Seminars of Department 1 at LORIA
This page presents the seminars of the Algorithms,
Computation, Image and Geometry department,
together with more specialized seminars from the teams
ABC,
Adagio,
Caramba,
Gamble,
Tangram,
MFX, and
Pixel.
2024
Day of the Department / Journée des doctorants du département
28 June 2024, room A008
- 9:00 - 9:20 Coffee
- 9:20 - 10:40
-
Julien Soumier, Caramba:
Going in higher dimensions to break and
create isogeny based protocols
-
Sarah Wajsbrot, Gamble:
Robust optimization through a geometric lens
-
Nicolas Maignan, Tangram:
Studies and challenges in legacy
video colorization
-
Medhi Kermaoui, Caramba:
Isogeny-based cryptography
- 10:40 - 11:10 Break
- 11:10 - 12:10
-
Hugo Leblond, Tangram:
Study of implicit and hybrid representations
for Simultaneous Localization And Mapping (SLAM) with LiDAR
-
Marie Bolzer, Caramba:
Lightweight implementations of S-boxes in
symmetric cryptography
-
Camille Lanuel, Gamble:
Computing an epsilon-net on a
hyperbolic surface
Camille Lanuel. Gamble. Computing an epsilon-net on a
hyperbolic surface
An epsilon-net of a metric space is a set of points such that the
balls of radius epsilon centered at those points cover the space, and
two distinct points of the epsilon-net are at least epsilon apart.
Hyperbolic surfaces are surfaces that are locally isometric to the
hyperbolic plane. There are not many algorithmic tools to handle
them, so we want to compute epsilon-nets on them. This will open the
door to approximation algorithms on hyperbolic surfaces and help to
study them. In my presentation, I will explain what a hyperbolic
surface is and how we can compute epsilon-nets on it using a Delaunay
refinement algorithm.
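For intuition, here is a minimal greedy sketch of the covering/packing definition above on a finite sample of a metric space (an illustrative stand-in, not the Delaunay refinement algorithm of the talk; the point sample and distance function are placeholders):

```python
# Greedy construction of an epsilon-net on a finite sample of a metric space.
# Illustrative sketch only: `points` and `dist` stand in for a sampled
# hyperbolic surface and its distance function.

def greedy_epsilon_net(points, dist, eps):
    """Return a subset N of `points` such that
    - every sample point is within eps of some point of N (covering), and
    - any two points of N are at least eps apart (packing)."""
    net = []
    for p in points:
        if all(dist(p, q) >= eps for q in net):
            net.append(p)
    return net

# Toy usage in the Euclidean plane (replace `dist` by a hyperbolic distance).
import math, random
random.seed(0)
pts = [(random.random(), random.random()) for _ in range(200)]
euclid = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
print(len(greedy_epsilon_net(pts, euclid, 0.2)), "net points")
```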
Nicolas Maignan. Tangram. Studies and challenges in legacy
video colorization
Video colorization gives a new look to archival documents, making
them more attractive to younger generations. This work not only
requires in-depth research to ensure historical accuracy, but is
also time-consuming due to the processing of the thousands of images
making up a video sequence. Current methods aimed at streamlining
this process remain inefficient, with significant variations in hue
over time, incomplete results for automatic methods, and artifacts
appearing during frame-by-frame colorization. Despite reduced
computation times, these shortcomings persist, forcing professionals
to rework sequences manually, even after using algorithms guided by
already colorized frames. Moreover, there is no database of old
videos in color, so no ground truth is available to train a
classical data-hungry model.
The aim of this thesis is to develop colorization methods that are
more stable over time, enabling experts to produce results with the
desired hues and saturations while consuming little data. The first
method we propose is based on a neural network using spatial and
temporal convolution kernels to diffuse the colors from a few frames
to the rest of the video. This first approach is fully unsupervised
and does not require any database. The aim of this approach is to
guarantee consistent colorization faithful to the user's style. In a
second step, we studied the limitations of the well-known DeOldify
method to check whether this approach remains accurate with small
amounts of data and with Technicolor films, the aim being to use
similarly contrasted videos.
Sarah Wajsbrot. Gamble. Robust optimization through a geometric lens
How to deal with uncertain data in an optimization problem?
Uncertainty may come from (i) measurement or estimation errors of
physical and environmental parameters, or unpredictable agent
behaviors and also from (ii) implementation errors for computational
reasons. Robust optimization is a methodology for obtaining solutions
that are worst-case optimal, i.e., resistant to any outcome of the
uncertain data. We study an instance of a two-stage robust linear
program: here we are allowed to take some decisions once the
uncertainty is revealed. Though it is known to be a hard problem, it
can be approximated by a "k-adaptability" version: before the
uncertainty is revealed, preselect k decisions and choose the best one
once it is revealed. Is the latter tractable, what is the geometry of
its feasible region, and can we use tools from combinatorial convexity
to better understand these problems?
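For readers unfamiliar with the terminology, a schematic formulation of the two-stage problem and its k-adaptability approximation (generic notation, assumed rather than taken from the talk; feasibility constraints are left implicit):

```latex
% Two-stage robust LP: the recourse y may react to the realized uncertainty u.
\min_{x} \; \max_{u \in U} \; \min_{y \,:\, (x,y)\ \text{feasible for } u} \; c^\top x + d^\top y

% k-adaptability: commit to k candidate recourses y_1,\dots,y_k up front,
% then apply the best feasible one once u is revealed.
\min_{x,\, y_1,\dots,y_k} \; \max_{u \in U} \; \min_{i \,:\, (x,y_i)\ \text{feasible for } u} \; c^\top x + d^\top y_i
```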
Marie Bolzer. Caramba. Lightweight implementations of S-boxes in
symmetric cryptography.
Cryptography aims to secure connections, data and, more generally,
information systems. However, cryptography adds a significant cost to system and
network infrastructures. Two major issues stand out: reducing costs as much as
possible, and obtaining and verifying security. The aim of my thesis is to use
algorithms and develop tools to make progress on these issues.
I will focus on S-boxes, which are non-linear cryptographic components. They are
omnipresent in standard block cipher constructions used in symmetric cryptography.
How to optimize their implementation, depending on context and technology, is
therefore an important issue.
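To make the object concrete, a minimal sketch of a 4-bit S-box applied as a lookup table (the table values are a toy choice for illustration, not an S-box studied in the thesis); lightweight implementation then asks how to realize such a table with as few gates or instructions as possible:

```python
# Toy 4-bit S-box used as a lookup table (illustrative values only).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def sub_nibbles(state):
    """Apply the S-box to every 4-bit nibble of a state given as a list of nibbles."""
    return [SBOX[x] for x in state]

print(sub_nibbles([0x0, 0x1, 0xF, 0x7]))  # -> [12, 5, 2, 13]
```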
Hugo Leblond. Tangram. Study of implicit and hybrid representations
for Simultaneous Localization And Mapping (SLAM) with LiDAR.
This thesis explores the application of implicit and hybrid
representations in simultaneous localization and mapping (SLAM) using
LiDAR technology. The presentation begins with a review of the state of
the art and a comparison between Neural Radiance Fields (NeRFs) and
Gaussian Splatting. Gaussian Splatting will then be studied in greater
depth, chosen for its precise environmental rendering capabilities and
memory efficiency. The presentation will then propose directions for
ongoing work aimed at overcoming the limitations of current methods,
improving accuracy, computational efficiency and management of large,
complex environments.
Julien Soumier. Caramba. Going in higher dimensions to break and
create isogeny based protocols.
A cornerstone of modern cryptography is the Diffie-Hellman algorithm,
which allows two parties to safely agree on a shared key. It is based on
the discrete logarithm problem. This protocol was considered very secure
until the advent of quantum algorithms. However, with the potential
arrival of large-scale quantum computers, we need to develop new key exchange
protocols. One candidate to resist quantum attacks is isogeny-based
cryptography. I will present the basics of this theory and explain why
current research is exploring higher-dimensional isogenies.
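As a reminder of the classical, pre-quantum setting mentioned above, a toy Diffie-Hellman exchange looks as follows (tiny, insecure parameters, for illustration only):

```python
# Toy Diffie-Hellman key exchange (insecure toy parameters, illustration only).
import random

p = 2147483647          # a public prime (2^31 - 1), far too small for real use
g = 7                   # a public base (toy choice)

a = random.randrange(2, p - 1)   # Alice's secret
b = random.randrange(2, p - 1)   # Bob's secret

A = pow(g, a, p)        # Alice sends g^a mod p
B = pow(g, b, p)        # Bob sends g^b mod p

# Both sides derive the same shared key g^(ab) mod p.
assert pow(B, a, p) == pow(A, b, p)
# Security rests on recovering a from (g, p, A) being hard -- the discrete
# logarithm problem, which Shor's quantum algorithm solves efficiently,
# hence the search for alternatives such as isogeny-based constructions.
```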
Medhi Kermaoui. Caramba. Isogeny-based cryptography.
In 1994, Peter Shor designed a quantum algorithm for efficiently
factoring large numbers and solving the discrete logarithm problem, two
fundamental problems on which public-key cryptography is based. In light
of this threat, the field of post-quantum cryptography has emerged,
aiming to develop cryptographic techniques that are secure against
quantum attacks. Isogeny-based cryptography is one of the candidates for
quantum-resistant cryptography.
MFX Seminar
3 April, 10:30
C103
Eric Garner
TU Delft
Computational design of patient-specific implants: from micro-architected materials to shape-matching geometry
Despite over a century's worth of technical improvements, the long-term
survivability associated with orthopedic implants continues to fall short. In
contrast to earlier designs, implant failure is no longer caused by a
structural failure of the implant itself. Rather, it results from the
implant's long-term deleterious effects on the surrounding bone tissue. Over time,
changes in mechanical loading conditions induce a reduction in bone density,
increasing the risk of fracture and destabilizing the bone-implant interface.
The mechanisms which drive peri-prosthetic bone loss are complicated and
inter-related. Add to this the unique morphological variations between
patients, and an optimal one-size-fits-all solution seems unlikely. This
research aimed to develop computational design strategies that automatically
generate patient-specific implants based on medical imaging, in an effort to
improve their long-term survivability.
15 March, 10:00
A006
Bernhard Kerbl
TU Wien
Compute-Based Rendering across the Board: Efficient Methods for Meshes,
Point Clouds and Radiance Fields
Dr. Kerbl will present his on-going research on compute-based rendering
and its use across 3D representations and applications. Modern 3D
content, both synthetic and captured, contains an unprecedented amount
of geometric detail. At the same time, user appreciation of 3D content
demands low-latency feedback loops: processing, exploring or modifying
3D scenes should be fast, or---even better---instant. Failure to meet
performance targets is often answered by applying more raw compute
power to the problem. This policy has led to systematic hardware
hoarding, a rift between "GPU-rich" and "GPU-poor", increased reliance
on cloud computing and an overall rise in global resource consumption.
A more sustainable solution to this challenge lies in the careful
design of inherently parallel algorithms, finding novel data structures
and optimal 3D scene representations for specific tasks. One pillar of
the research by Dr. Kerbl et al. towards this goal is the use of
compute-based rendering: exploiting GPU compute to assist or replace
the fixed-function pipeline for image formation. This talk illustrates
concrete examples where compute-based rendering achieves or surpasses
state-of-the-art results, given only a fraction of its competitors'
runtime resources. Apart from image formation itself, this talk will
also discuss recent applications in interactive editing and
differentiable rendering methods for radiance fields (NeRFshop, 3D
Gaussian Splatting).
2023
14 December, 10:30
B013
Charline Grenier
University of Strasbourg
Color-mapped noise vector fields for generating procedural
micro-patterns
Stochastic micro-patterns successfully enhance the realism of virtual
scenes. Procedural models using noise combined with transfer functions
are extremely efficient at generating them. However, most patterns employ
1D transfer functions, which assign color, transparency, or other
material attributes, based on a single scalar noise. Multi-dimensional
transfer functions have received widespread attention in other fields,
such as scientific volume rendering. However, their potential has not
yet been well explored for modelling micro-patterns in the field of
procedural texturing. We propose a procedural model for stochastic
patterns, defined as the composition of a bi-dimensional transfer
function (a.k.a. color-map) with a stochastic vector field. Our model
encompasses several existing procedural noises, such as Gaussian noise
and Phasor noise, and generates a much larger gamut of patterns,
including locally structured patterns. It can also be used to
procedurally enhance virtual terrains in real-time, for example by
adding spatially varying erosion patterns. By adapting the Phasor noise
to the specific characteristics of terrains (water flow, control over
slopes), we create relief details corresponding to the underlying
terrain characteristics, which preserve the coherence of generated
landforms and allow artistic control through a palette of control maps.
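A minimal sketch of the core idea, composing a bi-dimensional transfer function with two noise channels (here Gaussian-filtered white noise as a stand-in for Gaussian or Phasor noise; not the authors' implementation):

```python
# Sketch: pattern(x) = colormap(n1(x), n2(x)) with a bi-dimensional transfer function.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
n1 = gaussian_filter(rng.standard_normal((256, 256)), sigma=6)  # noise channel 1
n2 = gaussian_filter(rng.standard_normal((256, 256)), sigma=6)  # noise channel 2

def normalize(n):
    """Rescale a noise field to [0, 1] so it can index the 2D color-map."""
    return (n - n.min()) / (n.max() - n.min())

u, v = normalize(n1), normalize(n2)

# A simple 2D transfer function: blend two colors with the first channel and
# darken where the second channel is low, producing locally structured patterns.
c0, c1 = np.array([0.9, 0.7, 0.3]), np.array([0.2, 0.3, 0.6])
image = u[..., None] * c0 + (1 - u[..., None]) * c1
image *= np.where(v > 0.5, 1.0, 0.6)[..., None]

print(image.shape)  # (256, 256, 3) RGB pattern
```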
26 October, 14:00
A008
Lionel Pournin
Université Sorbonne Paris Nord
Kissing polytopes
Consider two disjoint lattice polytopes contained in the hypercube
[0,k]^d. The question of how close such a pair of polytopes can be
stems from various contexts where this minimal distance appears in
complexity bounds of optimization algorithms. Nearly matching lower and
upper bounds on this distance will be provided and its exact
computation discussed. Similar bounds will be given in the case of
disjoint rational polytopes whose binary encoding length is prescribed.
This talk is based on joint work with Antoine Deza, Shmuel Onn, and
Sebastian Pokutta.
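In symbols (notation assumed, not taken from the talk), the quantity in question is

```latex
% Smallest positive distance between two disjoint lattice polytopes in the cube [0,k]^d:
\delta(d,k) \;=\; \min \Bigl\{\, \mathrm{dist}(P,Q) \;:\; P, Q \subseteq [0,k]^d
  \text{ lattice polytopes},\; P \cap Q = \emptyset \,\Bigr\},
\qquad
\mathrm{dist}(P,Q) \;=\; \min_{x \in P,\; y \in Q} \lVert x - y \rVert .
```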
Jean Cardinal
Université Libre de Bruxelles
Improved Algebraic Degeneracy Testing
In the classical linear degeneracy testing problem, we are given n real
numbers and a k-variate linear polynomial F, for some constant k, and
have to determine whether there exist k numbers a1, ..., ak from the set
such that F(a1, ..., ak) = 0. We consider a generalization of this problem
in which F is an arbitrary constant-degree polynomial, we are given k
sets of n numbers, and have to determine whether there exists a k-tuple
of numbers, one in each set, on which F vanishes. We give the first
improvements over the naive O*(n^{k-1}) algorithm for this problem
(where the O*(.) notation omits subpolynomial factors), in both the
real RAM and algebraic decision tree models of computation.
All our results rely on an algebraic generalization of the standard
meet-in-the-middle algorithm for k-SUM, powered by recent algorithmic
advances in the polynomial method for semi-algebraic range searching.
In fact, our main technical result is much more broadly applicable, as
it provides a general tool for detecting incidences and other
interactions between points and algebraic surfaces in any dimension. In
particular, it yields an efficient algorithm for a general, algebraic
version of Hopcroft's point-line incidence detection problem in any
dimension.
This is joint work with Micha Sharir (SoCG'23).
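For orientation, the special case k = 3 with the linear polynomial F(a1, a2, a3) = a1 + a2 + a3, i.e. 3-SUM over three sets, already illustrates the naive O*(n^{k-1}) baseline that the talk improves on (a generic sketch, not the paper's algorithm):

```python
# Naive O(n^2) degeneracy test for the linear special case F(a,b,c) = a + b + c = 0
# over three sets A, B, C (the k = 3 linear instance, not the algebraic algorithm of the talk).

def three_sum_across_sets(A, B, C):
    """Return a triple (a, b, c), one element per set, with a + b + c == 0, or None."""
    lookup = set(C)                      # hash the last set
    for a in A:                          # O(n^2) pairs over the first two sets
        for b in B:
            if -(a + b) in lookup:
                return a, b, -(a + b)
    return None

print(three_sum_across_sets([1, 4, -7], [2, 5, 9], [3, -6, 11]))  # (1, 5, -6)
```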
5 October, 10:00
B013
Jing Ren
ETH Zurich
Digital 3D Smocking Design
We develop an optimization-based method to model smocking, a surface
embroidery technique that provides decorative geometric texturing while
maintaining stretch properties of the fabric. During smocking, multiple
pairs of points on the fabric are stitched together, creating
non-manifold geometric features and visually pleasing textures.
Designing smocking patterns is challenging, because the outcome of
stitching is unpredictable: the final texture is often revealed only
when the whole smocking process is completed, necessitating painstaking
physical fabrication and time-consuming trial-and-error
experimentation. This motivates us to seek a digital smocking design
method. Straightforward attempts to compute smocked fabric geometry
using surface deformation or cloth simulation methods fail to produce
realistic results, likely due to the intricate structure of the
designs, the large number of contacts and high-curvature folds. We
instead formulate smocking as a graph embedding and shape deformation
problem. We extract a coarse graph representing the fabric and the
stitching constraints, and then derive the graph structure of the
smocked result. We solve for the 3D embedding of this graph, which
in turn reliably guides the deformation of the high-resolution
fabric mesh. Our optimization-based method is simple, efficient,
and flexible, which allows us to build an interactive system for
smocking pattern exploration. To demonstrate the accuracy of our
method, we compare our results to real fabrications on a large set
of smocking patterns.
Day of the Department / Journée des doctorants du département
4 July 2023, room C005
- 9:00 - 9:30 Coffee
- 9:30 - 10:30
-
Marco Freire, MFX:
PCBend: Light Up Your 3D Shapes With Foldable Circuit Boards
-
Aude Marêché, Adagio:
Local analysis of a discrete surface by planar sectors and application to
normal estimation
-
Haetham Al Aswad, Caramba:
Discrete Logarithm Factory
- 10:30 - 11:00 Break
- 11:00 - 12:00
-
Radhouane Jilani, Tangram:
Simulation of catheter dynamics for neuroradiology
-
Ana Margarita Rodriguez Cordero,
Caramba:
T-wise independence for block ciphers
-
Bastien Laboureix, Adagio:
Connectivity of arithmetical hyperplanes
- 12:00 - 13:30 Lunch
- 13:30 - 14:30
-
Camille Lanuel, Gamble:
A toolbox for hyperbolic surfaces
-
Antoine Leudière, Caramba:
Drinfeld modules in SageMath
-
Yoann Coudert-Osmont, Pixel:
A new quantization algorithm for quad remeshing
Camille Lanuel. Gamble. A toolbox for hyperbolic surfaces
Hyperbolic surfaces are 2-dimensional Riemannian manifolds that
are locally isometric to the hyperbolic plane. They are hard to study in
practice, and the literature lacks tools to handle them. The diameter of a
surface is a natural parameter to consider. However, computing it for a generic
hyperbolic surface is hopeless, so my goal is to find an algorithm to
approximate it.
Bastien Laboureix. Adagio. Connectivity of arithmetical hyperplanes
When you were little, you were told that lines are continuous and have no
thickness. But if you take a digital image of a line and zoom in as far as
possible, you see a cluster of pixels rather than a continuous rectilinear
shape. What if we studied these digital, discrete lines instead of their
Euclidean counterparts? What if we looked into a new geometry? Beyond lines,
we can consider segments, polygons, or even go up in dimension and study
planes, spheres, hyperplanes. Of course, Euclidean geometry does not apply to
pixels: no more division, farewell to limits, everything becomes arithmetic!
So let us start geometry over from the beginning, with a simple problem about
planes!
Haetham Al Aswad. Caramba. Discrete Logarithm Factory
The Number Field Sieve (NFS) and its numerous variants form the
best family of algorithms to compute discrete logarithms in finite fields of medium and
large characteristic. We present an algorithm to quickly compute discrete
logarithms in a wide range of finite fields at the cost of a single
precomputation. The original idea was the Factorization Factory introduced by Coppersmith
to deal with the integer factorization problem. In 2013,
Barbulescu adapted the Factory idea to the discrete logarithm in prime finite fields
(i.e. with extension degree equal to one). Our work generalizes the
Factory idea to the discrete logarithm in finite fields of any extension degree.
We combine this idea with two other variants of NFS, namely the tower and
special variants. This combination leads to improvements in the asymptotic
complexity. Besides, we lay out analytic estimates of the practicality of this
method for 1024-bit targets and extension degree 6.
Yoann Coudert-Osmont. Pixel. A new quantization algorithm for quad remeshing
In the context of numerical simulation using the finite element method,
quadrilateral meshes are sometimes preferred to triangular meshes. While
generating triangular meshes is well handled, generating quad meshes is more
difficult. Some methods are robust and others produce high-quality meshes
but may fail. We are interested in the global parametrization method, which
belongs to the second category. First, I will explain how this method works.
Secondly, I will present a new algorithm we have designed to solve one of the
last steps of this method, which involves an integer-valued optimization problem.
Marco Freire. MFX. PCBend: Light Up Your 3D Shapes With Foldable Circuit Boards
Have you ever wanted to make a screen out of an object? Now you
can with PCBend, a pipeline for automatic foldable circuit layout generation.
We unfold an input object and use it as the blueprint for a printed circuit
board, which we densely pack with addressable RGB LEDs. The pipeline outputs
fabrication-ready files; all that remains is placing the order, 3D printing a
core, and putting everything together. You can design custom lighting effects
with a shader-like interface and make stunning visuals!
Ana Margarita Rodriguez Cordero. Caramba. T-wise
independence for block ciphers
At CRYPTO 2021, Liu, Tessaro and Vaikuntanathan used the notion of (almost) t-wise independence to prove the security of block ciphers. This notion studies how close the outputs of the cipher on t distinct inputs are to t uniformly sampled elements. In particular, they applied this idea to the AES cipher and used pairwise independence to assess its security against linear and differential attacks. In this talk I'll explain this technique and give leads for extending its use.
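Roughly, and with notation assumed rather than quoted from the paper, a keyed permutation family is almost t-wise independent when its outputs on any t distinct inputs are statistically close, over a random key, to t distinct uniform values:

```latex
% E_K is \varepsilon-almost t-wise independent if, for all distinct inputs x_1,\dots,x_t,
\Delta\Bigl( \bigl(E_K(x_1),\dots,E_K(x_t)\bigr),\; (U_1,\dots,U_t) \Bigr) \;\le\; \varepsilon,
% where K is a uniform key, (U_1,\dots,U_t) are uniform pairwise-distinct values,
% and \Delta denotes statistical (total variation) distance.
```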
Antoine Leudière. Caramba. Drinfeld modules in SageMath
Drinfeld modules are mathematical objects that were introduced in the 1970s to
answer profound questions in algebra, particularly in the context of the class
field theory for function fields. Over time, the theory of Drinfeld modules has
become well-established and is now recognized as an essential tool in various
significant areas of Mathematics, such as the Langlands program. Notably,
Laurent Lafforgue, who used Drinfeld modules to solve open problems from the
Langlands program for function fields, was awarded the Fields Medal in
2002.
Prior to our contribution, Drinfeld modules were absent from all standard
distributions of computer algebra systems. We decided to implement Drinfeld
modules in SageMath - a popular free and open source computer algebra system.
Our implementation is available in the standard distribution of SageMath since
version 10.0.
In this presentation, we will review the development process, with a focus
on the crucial issue of data representation. We will see that Drinfeld modules
do not directly fit the SageMath paradigm of object representation. Our primary
task was thus to find the best compromise between ecosystem integration,
mathematical satisfaction, and interface elegance.
Radhouane Jilani. Tangram. Simulation of catheter dynamics for neuroradiology
The mechanical thrombectomy procedure is used to treat ischemic strokes. This method employs a long, flexible
catheter that navigates through the vascular system in order to extract blood clots. Building a computer-based
simulator can help clinicians in preparing a patient treatment. Simulation of this procedure involves solving the
constrained dynamics of the catheter. By leveraging its slenderness, the Cosserat theory can be applied. It models
the catheter as a one-dimensional oriented curve that evolves according to a system of coupled non-linear partial
differential equations. This approach is faster than finite element methods, while maintaining high accuracy.
Cosserat dynamics can be solved using staggered grid discretization, but the resulting equations may become stiff
and the constrained motion may exhibit unrealistic vibratory behaviors. We adopted a continuous approach by
treating the dynamic system as a boundary value problem, and solved it using an orthogonal collocation method. Our
method was compared to both the shooting method and discrete element methods on validation tests from the
literature. The comparison shows that our approach provides a good compromise between accuracy and computational efficiency.
Aude Marêché. Adagio. Local analysis of a discrete surface by planar sectors and application to
normal estimation
Applications that extract geometric features require an analysis of the
surface of discrete objects. One aspect of this analysis can be the local
study of the planarity of the neighborhood of a point on a surface. In this
context, we propose an approach that splits a non-planar circular neighborhood
into planar angular sectors around a point, using classical algorithms for
recognizing pieces of discrete planes. Through a local analysis, it allows us
to highlight elements of the structure of the surface. To demonstrate the
relevance of these decompositions, we derive from them an estimator of the
normal at each point of a discrete surface, with results comparable to the
state of the art.
MFX Seminar
22 June, 13:30
A008
Silvia Sellán
University of Toronto
Uncertain Surface Reconstruction
We propose a method to introduce uncertainty to the surface reconstruction problem. Specifically, we introduce a statistical extension of the classic Poisson Surface Reconstruction algorithm for recovering shapes from 3D point clouds. Instead of outputting an implicit function, we represent the reconstructed shape as a modified Gaussian Process, which allows us to conduct statistical queries (e.g., the likelihood of a point in space being on the surface or inside a solid). We show that this perspective improves PSR's integration into the online scanning process, broadens its application realm, and opens the door to other lines of research such as applying task-specific priors.
2022
27 October, 14:30
A006
Jean Cardinal
Université libre de Bruxelles
Inapproximability of shortest paths on perfect matching polytopes
We consider the computational problem of finding short paths in the skeleton of the perfect matching polytope of a bipartite graph. We prove that unless P=NP, there is no polynomial-time algorithm that computes a path of constant length between two vertices at distance two of the perfect matching polytope of a bipartite graph. This answers negatively a question posed by Ito, Kakimura, Kamiyama, Kobayashi and Okamoto [SIAM Journal on Discrete Mathematics, 36(2), pp. 1102-1123 (2022)]. Assuming the Exponential Time Hypothesis we prove the stronger result that there exists no polynomial-time algorithm computing a path of length at most (1/2-o(1)) log n / (log log n) between two vertices at distance 2 of the perfect matching polytope of an n-vertex bipartite graph. These results remain true if the bipartite graph is restricted to be of maximum degree 3.
The above has the following interesting implication for the performance of pivot rules for the simplex algorithm on simply-structured combinatorial polytopes: If P≠NP, then for every simplex pivot rule executable in polynomial time and every constant k there exists a linear program on a perfect matching polytope and a starting vertex of the polytope such that the optimal solution can be reached in two monotone steps from the starting vertex, yet the pivot rule will require at least k steps to reach the optimal solution. This result remains true if in addition so-called circuit pivots are allowed.
27 October, 15:20
A006
Lionel Pournin
Université Sorbonne Paris-Nord
Distance, strong convexity, flagness, and associahedra
One can always transform a triangulation of a convex polygon into another by performing a sequence of edge flips, which amounts to following a path in the graph G of the associahedron. The least number of flips required to do so is then a distance in that graph whose estimation is instrumental in a variety of contexts, as for instance in computational biology, in computer science, or in algebraic topology. On the other hand, it is known that paths in G correspond to a certain kind of 3-dimensional triangulation. This talk is about the recent proof that these 3-dimensional triangulations are flag when the corresponding path is a geodesic. This result, which provides a new powerful tool to study the geometry of G, can be thought of as a 3-dimensional analogue of a well-known strong convexity property of G. Several consequences on the computation of distances in G and on strong convexity in related graphs will be discussed. This talk is based on joint work with Zili Wang (Dartmouth College).
10 October, 10:00
C103
Douglas Perrin
Harvard Biorobotics Lab
Desktop to bedside: Bespoke application development to translate computer science to clinical tools
In this talk, I will review the process of moving computer science to clinical applications, what I call Desktop-to-Bedside (a play on Bench-to-Bedside), and the challenges faced by conducting software development inside a hospital vs. a more traditional technical environment. I will sketch out a pipeline for Desktop-to-Bedside and give a high-level description of several projects, how far they have made it through this pipeline, and their pain points.
10 October, 10:00
C103
Peter E. Hammer
Harvard Biorobotics Lab
Quantitative approaches for preoperative planning in aortic valve repair in children
Surgical repair of heart valves remains among the most challenging procedures in cardiovascular surgery. Patient-specific computer simulation has been developed as a potential tool to enable surgeons to test proposed valve repair strategies prior to surgery in order to achieve more reliable repairs and better outcomes. Many groups worldwide have been making steady progress developing these image-based computational technologies over decades. Despite this progress, these technologies have not yet matured to the point of clinical utility, leaving surgeons to continue to rely on clinical experience and subjective methods in their day-to-day surgical practice. In this talk, I will present efforts that live in the middle ground between image-based computational surgical planning and the current approach of improvising in the OR based on clinical judgement. These emerging, quantitative methods rely on preoperative or intraoperative measurements of valve features and simple, conceptual models of valve relationships and function.
20 September, 14:00
A008
André Nusser
MPI
Fine-Grained Complexity and Algorithm Engineering in Computational Geometry
Computer science as a field aims to better understand the power and limitations of computation but also enables new results in fields that crucially rely on computation. While at first glance it seems that algorithm design should be the central area of research in this regard, progress also very much relies on complexity theory, to understand the inherent hardness of certain problems, and on an engineering approach, to transfer the theoretical knowledge into practical knowledge that can then be further exploited. In this spirit, we consider three central parts of algorithms research in this talk: algorithm design, (fine-grained) complexity lower bounds, and algorithm engineering. More concretely, we consider results in these areas for various geometric similarity measures like the Fréchet distance, the Hausdorff distance under translation, and dynamic time warping under translation.
20 September, 15:00
A008
Justin Dallant
ULB
Conditional lower bounds for dynamic geometric problems.
Fine-grained computational hardness conjectures and conditional lower bounds on complexity have been a familiar tool in the computational geometer's toolkit ever since the seminal work of Gajentaan and Overmars (1995) introducing 3-SUM hardness. In the data structure world, the study of such polynomial lower bounds for dynamic problems was arguably pioneered (and certainly popularized) by Patrascu in 2010, when he introduced the so-called Multiphase problem. He used this problem as a stepping stone to show that many dynamic problems which seemed hard to solve in subpolynomial time had a good reason to be, as the contrary would violate the 3-SUM conjecture. In this talk I will go over these results of Patrascu connecting the 3-SUM problem to seemingly unrelated dynamic problems. We will then bring it back to geometry by seeing how the Multiphase problem allows us to obtain polynomial lower bounds for a variety of dynamic geometric problems where previously only upper bounds were known. These latter results form the basis of a recent ESA-22 paper which is joint work with John Iacono.
Day of the Department / Journée des doctorants du département
11 July 2022, room A008
- 9:00 - 9:30 Coffee
- 9:30 - 10:30
-
Nariman Khaledian, Tangram:
Capturing Contact in Mitral Valve Dynamic Closure with Fluid-Structure Interaction Simulation.
Slides
-
David Desobry, Pixel:
Quadrilateral meshing for large deformations.
Slides
-
Ana Rodriguez Cordero, Caramba:
Differential cryptanalysis in block ciphers.
Slides
- 10:30 - 11:00 Break
- 11:00 - 12:00
-
Radhouane Jilani, Tangram:
Predictive simulation for interventional neuroradiology.
Slides
-
Tom Masini, ABC:
Margin Natarajan dimension of multilayer perceptrons.
Slides
-
Bastien Laboureix, Adagio:
Connectivity of arithmetical hyperplanes.
Slides
- 12:20 - 13:30 Lunch
- 13:30 - 14:30
-
Yoann Coudert-Osmont, Pixel:
Closed space-filling curves with controlled orientation for 3D printing.
Slides
-
Youssef Assis, Tangram:
Aneurysm detection using deep learning.
Slides
-
Leo Valque, Gamble:
Rounding 3D polygonal meshes.
Slides
Nariman Khaledian, Tangram. Capturing Contact in Mitral Valve Dynamic Closure with Fluid-Structure Interaction Simulation.
Realistic fluid-structure interaction (FSI) simulation of the mitral valve opens the way toward
planning for surgical repair. In the literature, blood leakage is identified by measuring the flow
rate, but detailed information about closure efficiency is missing. We present in this paper an FSI
model that improves the detection of blood leakage by building a map of contact.
Methods -
Our model is based on the immersed boundary method that captures a map of contact and perfect closure of the mitral valve, without the presence of orifice holes, which often appear with existing methods. We also identified important factors influencing convergence issues.
Results -
The method is demonstrated in three typical clinical situations: mitral valve with leakage, bulging, and healthy. In addition to the classical ways of evaluating MV closure, such as stress distribution and flow rate, the contact map provides easy detection of leakage with identification of the sources of leakage and a quality assessment of the closure.
Conclusions -
Our method significantly improves the quality of the simulation and allows the identification of regurgitation as well as a spatial evaluation of the quality of valve closure. Comparably fast simulation, ability to simulate large deformation, and capturing detailed contact are the main aspects of the study.
Ongoing work -
Currently, we are focused on implementing more complex material characteristics in the model to simulate a more realistic behavior of the mitral valve motion. Implementation of real mitral valve geometry based on medical imaging is another ongoing work that we are currently focused on.
David Desobry, Pixel. Quadrilateral meshing for large deformations.
We present a fully automatic method to produce quadrilateral meshes that stay valid despite deformations during a numerical simulation. Our approach is based on the Quadcover algorithm, used in computer graphics to generate high-quality quad meshes. Frame-field-based algorithms like Quadcover are currently not used for simulations with high deformation because the quadrilateral meshes they generate have no reason to remain of sufficient quality when their geometry is deformed. In this talk I explain how we can take into account deformations that we have already computed to produce quad meshes adapted to these coming geometrical changes.
Ana Rodriguez Cordero, Caramba. Differential cryptanalysis in block ciphers
In symmetric cryptography, block ciphers are widely used to secure the exchange of messages between parties. Cryptanalysis is responsible for testing the security of these ciphers to ensure that communications are secure. During this talk we will see what block ciphers are and how to study their security against differential attacks.
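As a concrete illustration of the central object of differential attacks, a short sketch computing the difference distribution table (DDT) of a toy 4-bit S-box (illustrative values, not a cipher from the talk):

```python
# Difference distribution table (DDT) of a toy 4-bit S-box.
# DDT[dx][dy] counts inputs x with S(x) ^ S(x ^ dx) == dy; large entries for
# dx != 0 indicate differential behaviour an attacker can exploit.
SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]   # toy permutation, illustration only

DDT = [[0] * 16 for _ in range(16)]
for dx in range(16):
    for x in range(16):
        dy = SBOX[x] ^ SBOX[x ^ dx]
        DDT[dx][dy] += 1

print(DDT[0][0])                              # 16: zero input difference always maps to zero
print(max(max(row[1:]) for row in DDT[1:]))   # differential uniformity of this toy S-box
```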
Radhouane Jilani, Tangram. Predictive simulation for interventional neuroradiology.
Ischemic strokes kill several million people worldwide every year. Manual thrombectomy, which consists in capturing the blood clot to unblock the artery, is currently one of the safest and most effective treatments. It requires navigating a catheter through the patient's vascular tree until the clot is reached, which is a difficult gesture. The goal of my thesis is to develop an accurate and interactive simulation that helps plan such an operation. It is therefore crucial to model the catheter, its environment and their interactions precisely. The Cosserat catheter model will be used because it offers the best compromise between computation time and accuracy. The vessels will be modeled implicitly and a new contact-handling algorithm will be developed. After an introduction to my thesis and the state of the art, I will present three simulations that reproduce a real catheter deployment experiment.
Tom Masini, ABC. Margin Natarajan dimension of multilayer perceptrons.
We are interested in the use of combinatorial dimensions to upper-bound the error probability of multi-class margin classifiers. In this context, the reference combinatorial dimension is the gamma-dimension. However, its use suffers from the lack of a good structural result. We propose to circumvent this difficulty by substituting the margin Natarajan dimension for it. To illustrate the potential of this approach, we study the particular case of multilayer perceptrons.
Bastien Laboureix, Adagio. Connectivity of arithmetical hyperplanes.
When you were little, you were told that lines are continuous and have no thickness. But if you take a digital image of a line and zoom in as far as possible, you see a cluster of pixels rather than a continuous rectilinear shape. What if we studied these digital, discrete lines instead of their Euclidean counterparts? What if we looked into a new geometry? Beyond lines, we can consider segments, polygons, or even go up in dimension and study planes, spheres, hyperplanes. Of course, Euclidean geometry does not apply to pixels: no more division, farewell to limits, everything becomes arithmetic! So let us start geometry over from the beginning, with a simple problem about planes!
Yoann Coudert-Osmont, Pixel. Closed space-filling curves with controlled orientation for 3D printing.
We explore the optimization of closed space-filling curves under orientation objectives. By solidifying material along the closed curve, solid layers of 3D prints can be manufactured in a single continuous extrusion motion. The control over orientation enables the deposition to align with specific directions in different areas, or to produce a locally uniform distribution of orientations, patterning the solidified volume in a precisely controlled manner. Our optimization framework proceeds in two steps. First, we cast a combinatorial problem, optimizing Hamiltonian cycles within a specially constructed graph. We rely on a stochastic optimization process based on local operators that modify a cycle while preserving its Hamiltonian property. Second, we use the result to initialize a geometric optimizer that improves the smoothness and uniform coverage of the cycle while further optimizing for alignment and orientation objectives.
Youssef Assis, Tangram. Aneurysm detection using deep learning.
The detection of intracranial aneurysms from Magnetic Resonance Angiography (MRA) 3D images is a rapidly growing problem of clinical importance, which is not only time-consuming for radiologists but also extremely difficult to automate. To overcome the data scarcity and class imbalance problems related to aneurysm data, state-of-the-art methods adopt model-centric approaches, focused on network architectures and loss functions, but they only achieve fair sensitivity with a high false positive rate. Our work mainly follows a data-centric approach in which we make the most of our weak annotations: a small-patch approach is combined with an adapted data sampling strategy, taking into account the specificities of aneurysms. Moreover, we investigated the impact of various network architectures for aneurysm detection, and compared them using performance metrics that are more relevant to object detection.
Leo Valque, Gamble. Rounding polygonal meshes
To build a 3D mesh, Boolean operations are often used. But the resulting points have rational coordinates whose numerators and denominators require greater precision, so you will probably need to round the coordinates of those points. However, rounding can create intersections and corrupt your mesh. I will present our new algorithm to solve this problem in 3D and the first results of its implementation.
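A small sketch of why rounding becomes necessary: the exact intersection of segments with integer endpoints has rational coordinates whose numerators and denominators grow beyond the input precision (toy 2D example with Python fractions; the 3D rounding algorithm of the talk is of course far more involved):

```python
# Exact intersection of two 2D lines through integer endpoints: the result is rational,
# with larger numerators/denominators than the inputs -- the reason meshes produced
# by Boolean operations need rounding.
from fractions import Fraction

def line_intersection(p, q, r, s):
    """Intersection point of lines (p,q) and (r,s), in exact rational arithmetic."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p, q, r, s
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = Fraction((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4), den)
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

pt = line_intersection((0, 0), (9, 5), (1, 7), (8, -3))
print(pt)                          # (Fraction(531, 125), Fraction(59, 25)): more bits than the inputs
print(float(pt[0]), float(pt[1]))  # rounding to floats may create new intersections elsewhere
```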
8 July, 13:30
A006
Boris Bukh
Carnegie Mellon University
Enumeration of interval graphs and d-representable complexes
How many essentially distinct ways are there to arrange n convex sets in R^d?
Here, `essentially distinct' means `with different intersection pattern'. We discuss this question both in dimension 1, where it amounts to counting interval graphs, and in higher dimensions. Based on joint work with Amzi Jeffs.
30 June, 14:30
A008
Krishna Menon
Chennai Mathematical Institute
A branch statistic for trees
A hyperplane arrangement is a finite collection of affine hyperplanes in Euclidean space. Its regions are the connected components of the complement of these hyperplanes. By a theorem of Zaslavsky, the number of regions of a hyperplane arrangement is the sum of coefficients of its characteristic polynomial. In this talk, we study an important class of arrangements called extended Catalan arrangements. The regions of such arrangements are known to correspond to certain labeled trees. We define a statistic on these trees such that its distribution is given by the coefficients of the characteristic polynomial. We will end by looking at some other arrangements and interpretations of their characteristic polynomial coefficients.
This talk is based on joint work with Priyavrat Deshpande.
Preprint
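For reference, the theorem quoted above in its standard form (for an arrangement in R^n):

```latex
% Zaslavsky's theorem: for a hyperplane arrangement \mathcal{A} in \mathbb{R}^n with
% characteristic polynomial \chi_{\mathcal{A}}(t), the number of regions of the complement is
r(\mathcal{A}) \;=\; (-1)^{n}\,\chi_{\mathcal{A}}(-1),
% i.e. the unsigned sum of the coefficients of \chi_{\mathcal{A}}.
```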
DEPARTMENT SEMINAR
proposed by PIXEL
15 April, 10:00
C005
Yuri Nesterenko
Siemens
A concept of a generalized Ginzburg-Landau theory for 3D frame fields design
A spherical harmonics-based formulation of the frame fields design problem has recently been introduced for hexahedral meshing. One of the main difficulties in applying this approach is the need to take into account restrictions corresponding to the hexahedral symmetries. We propose a solution to this problem by introducing an SO(3)-invariant energy function based on the implicit representation of the manifold of symmetric harmonics and closely related to the well-known Ginzburg-Landau functional. Further, we discuss a concept of a generalized Ginzburg-Landau theory and its possible practical outcomes.
LORIA COLLOQUIUM,
proposed by GAMBLE
7 April 2022, 13:30,
Amphi Gilles Kahn
Hugo Parlier, Université du Luxembourg
Playing puzzles on surfaces
the colloquium page
DEPARTMENT SEMINAR
proposed by MFX
18 March, 10:30
C005
Géraldine Morin,
Université de Toulouse
From skeletons to models (2D, 3D, reconstructed or volumetric)... and back!
Skeleton representations provide a lower-dimensional model of a 2D or 3D shape: a set of curves for 2D shapes, a set of sheets and curves for 3D models.
We first propose an efficient and robust 2D skeleton estimation algorithm, which generates a noise-free skeleton that guarantees an epsilon-approximation of the original shape in the sense of the Hausdorff distance. We briefly present ongoing work to generalize this approach to 3D.
Then, we will see how the robust estimation of the 2D skeleton makes it possible to reconstruct a tubular object composed of branches.
Finally (last but not least), we propose a method for generating continuous parametric volumetric models from 3D curvilinear skeletons. The consistency of the topology of the volumetric object is ensured by the use of semi-simploidal sets. The branches are joined at the vertices of the skeleton in a continuous way, and any topological configuration can be generated.
DEPARTMENT SEMINAR
proposed by PIXEL
24 February, 10:00
A008
Nicolas Donati,
École polytechnique
Injecting Orientation into Spectral Shape Matching
Given two 3D surfaces, we propose to use intrinsic quantities to compute orientation aware near-isometries between these shapes. Previous techniques focused on matching the functional spaces of the two shapes to retrieve mappings between source and target shape. We shift focus to the vector field spaces of the shapes, also called tangent bundles.
By considering the differential of the mapping, and imposing desirable properties on this object, we are able to restrict the search space and only retrieve orientation-preserving conformal (angle-preserving) maps between surfaces. The key idea of this approach is to represent tangent vector fields as complex fields, and to notice that if the mapping is conformal, its differential becomes a complex-linear map between the tangent bundles.
This idea can be used to upgrade many spectral shape matching algorithms, as well as to transfer vector fields between curved domains simply and accurately.
2021
DEPARTMENT SEMINAR
proposed by Caramba.
16 December 2021, 10:30
A008
Guillaume Moroz,
Gamble
New data structure for univariate polynomial approximation and applications to root isolation, numerical multipoint evaluation, and other problems
We present a new data structure to approximate accurately and efficiently a polynomial f of degree d given as a list of coefficients. Its properties allow us to improve the state-of-the-art bounds on the bit complexity for the problems of root isolation and approximate multipoint evaluation. This data structure also leads to a new geometric criterion to detect ill-conditioned polynomials, implying notably that the standard condition number of the zeros of a polynomial is at least exponential in the number of roots of modulus less than 1/2 or greater than 2.
22 June, 14:30
Terrasse
Thomas Fernique,
LIPN
Sphere packings of maximal density
It is well known that the best way to cover the largest proportion of the Euclidean plane with disjoint identical disks is to center these disks on a triangular lattice. This problem can be generalized in two directions: to higher dimensions, or with disks of different sizes. The first direction has been the most studied (for example in dimension 3, with the proof of the Kepler conjecture by Hales and Ferguson in 1998). Here, we will rather focus on the second direction, in particular the case of two disk sizes.
MFX Seminar
June 14th, 4pm
videoconference
Silvia Sellán
University of Toronto
A deep dive into Swept Volumes
Abstract:
Given a solid 3D shape and a trajectory of it over time, we compute its swept volume – the union of all points contained within the shape at some moment in time. We consider the representation of the input and output as implicit functions, and lift the problem to 4D spacetime, where we show the problem gains a continuous structure which avoids expensive global searches. We exploit this structure via a continuation method which marches and reconstructs the zero level set of the swept volume, using the temporal dimension to avoid erroneous solutions. We show that, compared to other methods, our approach is not restricted to a limited class of shapes or trajectories, is extremely robust, and its asymptotic complexity is an order lower than standards used in the industry, enabling its use in applications such as modeling, constructive solid geometry, and path planning.
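For context, the naive baseline that the spacetime continuation method improves on is plain time sampling of the implicit function under the rigid motion (a hedged sketch with a unit sphere and a translating trajectory as toy stand-ins):

```python
# Naive time-sampled swept-volume implicit: f_swept(x) = min_t f(R(t)^T (x - c(t))).
# Dense sampling can miss thin features and is slow -- the motivation for the
# 4D spacetime continuation method of the talk. Shape and trajectory are toy choices.
import numpy as np

def f_shape(p):
    """Signed distance of a unit sphere centered at the origin."""
    return np.linalg.norm(p) - 1.0

def trajectory(t):
    """Rigid motion at time t in [0, 1]: here a pure translation along x."""
    return np.eye(3), np.array([3.0 * t, 0.0, 0.0])

def f_swept(x, n_samples=200):
    values = []
    for t in np.linspace(0.0, 1.0, n_samples):
        R, c = trajectory(t)
        values.append(f_shape(R.T @ (x - c)))
    return min(values)

print(f_swept(np.array([1.5, 0.0, 0.0])) < 0.0)   # True: inside the swept capsule
print(f_swept(np.array([0.0, 2.5, 0.0])) < 0.0)   # False: outside
```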
Bio:
Silvia Sellán is a second-year PhD student at the University of Toronto, supervised by professor Alec Jacobson. She has interned twice at Adobe Research and twice at the Fields Institute of Mathematical Sciences, and has published three first-author papers at ACM SIGGRAPH. Her research focuses on specific geometry processing challenges one encounters in data captured from the real world, and geometric data aimed at fabrication.
28 May
videoconference
Gilles Simon, Tangram, Loria
Jan van Eyck's perspectival system elucidated through computer vision
Days of the Department / Journées des doctorants du département
18 May -- 17 June 2021
18-21 May: https://cnrs.zoom.us/j/97699373100 Meeting ID: 976 9937 3100
Passcode: jAD1vL
14-17 June: https://cnrs.zoom.us/j/98368582262 Meeting ID: 983 6858 2262
Passcode: 2WvVBD
- Tuesday 18 May, 13:30
- Léo Valque, Gamble :
Rounding polygonal meshes Slides
- Florian Delconte, Adagio :
Tree Defect Segmentation using Geometric Features and CNN Slides
- Wednesday 19 May, 13:30
- David Lopez, Pixel :
Delaunay triangulation: how to preserve the volume? Slides
- Matthieu Zins, Tangram :
"Object-Based Visual Localization From Ellipsoidal Model
and 3D-Aware Ellipse Prediction" Slides
- Thursday 20 May, 13:30
- Youssef Assis, Tangram :
Intracranial aneurysms detection using deep learning Slides
- Melike Aydinlilar, MFX :
Ray-Tracing Implicit Surfaces Slides
- Friday 21 May, 13:30
- Marco Freire, MFX :
Layout problems and generative design for shape modeling in
computational fabrication Slides
- David Desobry, Pixel :
Designing 2D and 3D Non-Orthogonal Frame Fields Slides
- Quentin Yang, Caramba :
Coercion resistance with cast-as-intended verification Slides
- Monday 14 June, 11:30
- Nariman Khaledian, Tangram :
Toward a Functional Model of the Mitral Valve Slides
- Justine Basselin, Pixel :
Restricted Power diagram on the GPU Slides
- Tuesday 15 June, 11:30
- Abdelkarim Elassam, Tangram :
Horizon Line and Vanishing points detection using Deep Learning Slides
- Louis Massucci, ABC :
Statistical learning theory and hybrid dynamical system identification Slides
- Wednesday 16 June, 11:30
- Guillaume Coiffier, Pixel :
Improving global parametrizations for quadmeshing Slides
- Nuwan Herath Mudiyanselage, Gamble :
Fast algorithm for the visualization of surfaces Slides
- Thursday 17 June, 11:30
- Hamid Boukerrou, Caramba :
Hybrid architecture of LPV dynamical systems in the context of cybersecurity Slides
- Francois Protais, Pixel :
Computing Foldover-free maps Slides; Video
Tuesday 18 May, 13:30
Leo Valque, Gamble. Rounding polygonal meshes
If you compute the intersection of two segments, the resulting point has rational coordinates whose numerator and denominator require greater precision, so you will probably need to round the coordinates of that point. But rounding can create new intersections and corrupt your model. I will recall the solutions to these problems in 2D and present an idea to solve the problem in 3D.
Florian Delconte, Adagio. Tree Defect Segmentation using Geometric Features and CNN
Estimating the quality of standing trees or roundwood after felling is a crucial step in forest production trading. The ongoing revolution in the forest sector resulting from the use of 3D sensors can also contribute to this step. Among them, terrestrial lidar scanning is a reference descriptive method offering the possibility to segment defects. In this paper, we propose a new reproducible method to automatically segment these defects. It is based on the construction of a relief map inspired by a previous strategy, combined with a convolutional neural network to improve the resulting segmentation quality. The proposed method outperforms previous results, and the source code is publicly available with an online demonstration allowing the defect detection to be tested without any software installation.
Wednesday 19 May, 13:30
David Lopez, Pixel. Delaunay triangulation: how to preserve the volume?
The Delaunay triangulation is a fundamental tool for 2D remeshing. Under certain
sampling hypotheses, it can be extended to surfaces
to automatically generate a new triangulated surface whose
elements have good shape criteria
(i.e., generation of the restricted Delaunay triangulation from a restricted Voronoi diagram). Unfortunately, the result does not always offer a good geometric approximation, even when its vertices lie exactly on the original surface.
We propose here a post-processing step that optimizes the position of these vertices in order
to reduce the geometric distance between the two
representations. More precisely, this distance is expressed
using signed volumes that we propose to compensate locally, so as
to eventually cancel them at the scale of the sampling.
This method is particularly well suited to preserving volume during
the remeshing step of a surface-tracking method for fluid simulation.
Matthieu Zins, Tangram. Object-Based Visual Localization From Ellipsoidal Model and 3D-Aware Ellipse Prediction
In our work, we propose a method for initial camera pose estimation from just a single image which is robust to viewing conditions and does not require a detailed model of the scene. This method meets the growing need of easy deployment of robotics or augmented reality applications in any environments, especially those for which no accurate 3D model nor huge amount of ground truth data are available. It exploits the ability of deep learning techniques to reliably detect objects regardless of viewing conditions. Previous works have also shown that abstracting the geometry of a scene of objects by an ellipsoid cloud allows to compute the camera pose accurately enough for various application needs. Though promising, these approaches use the ellipses fitted to the detection bounding boxes as an approximation of the imaged objects. In this work, we go one step further and propose a learning-based method which detects improved elliptic approximations of objects which are coherent with the 3D ellipsoids in terms of perspective projection. Experiments prove that the accuracy of the computed pose significantly increases thanks to our method. This is achieved with very little effort in terms of training data acquisition -- a few hundred calibrated images of which only three need manual object annotation.
Thursday 20 May, 13:30
Youssef Assis, Tangram. Intracranial aneurysms detection using deep learning
The detection of intracranial aneurysms from Magnetic Resonance Angiography (MRA) images is a problem of rapidly growing clinical importance, which is time-consuming but also extremely challenging to automate. Recently, the integration of deep neural networks in the medical field has instigated a streak of methods that have convincingly shown promising performance. However, they achieve insufficient specificity and have fairly high false positive rates. Therefore, they cannot be used in real-world scenarios yet.
Medical image data are limited in number, difficult to annotate, and aneurysms are very scarce and small structures. This results in a small-data, high-class-imbalance problem. My talk will describe our proposed data management strategy, made of rough annotations, a small patch-based approach and adapted sampling, and report on the detection performance obtained with a vanilla U-net architecture. I will also present our preliminary investigation into more advanced network architectures to further improve these results and ultimately obtain an effective system which assists radiologists in their daily routine.
Melike Aydinlilar, MFX. Ray-Tracing Implicit Surfaces
Implicit surfaces provide many advantages in terms of creating smooth, blending shapes. However, rendering these surfaces is not straightforward. Using ray tracing allows us to create high-quality images at arbitrary resolution without resorting to tessellating the surfaces. We propose efficient methods for ray tracing implicit surfaces using guided interval analysis and numerical root-finding methods based on low-degree polynomials, in order to employ these methods for additive manufacturing.
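A minimal sketch of root finding along a ray for an implicit surface f = 0, using fixed-step marching plus bisection as a generic stand-in for the guided interval analysis and low-degree polynomial root finding of the talk:

```python
# Find the first intersection of a ray with the implicit surface f(p) = 0 by
# fixed-step marching to bracket a sign change, then bisection to refine it.
import numpy as np

def f(p):                                   # toy implicit: unit sphere at the origin
    return np.dot(p, p) - 1.0

def first_hit(origin, direction, t_max=10.0, step=0.05, tol=1e-7):
    d = direction / np.linalg.norm(direction)
    t0, v0 = 0.0, f(origin)
    t = step
    while t <= t_max:
        v = f(origin + t * d)
        if v0 * v <= 0.0:                   # sign change: root bracketed in [t0, t]
            lo, hi = t0, t
            while hi - lo > tol:            # bisection refinement
                mid = 0.5 * (lo + hi)
                if v0 * f(origin + mid * d) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            return origin + 0.5 * (lo + hi) * d
        t0, v0 = t, v
        t += step
    return None                             # no intersection within t_max

print(first_hit(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0])))  # ~ [0, 0, -1]
```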
Friday 21 May, 13:30
Marco Freire, MFX. Layout problems and generative design for shape modeling in computational fabrication
Layout problems appear in many areas of engineering and computer science. Typically, a layout problem requires spatially arranging and interconnecting a number of geometric elements in a domain. The problem definition also includes a set of constraints, which must be respected in order to produce a valid layout. In this presentation I will showcase my work on two specific problems: electronic circuit layouts and supports for additive manufacturing.
David Desobry, Pixel. Designing 2D and 3D Non-Orthogonal Frame Fields
We present a method for direction fields design on surface and volumetric meshes supporting non-orthogonality. Our approach is a generalization of the representation of 3D cross fields in spherical harmonic basis. As such it induces a geometrically meaningful measure of smoothness, allows orthogonality control by a simple parameter and enables orientation constraints of a single direction. To the best of our knowledge this is the first work to propose non-orthogonal 3D frame fields design. We demonstrate the applicability of our method to generate anisotropic quadrangular and hexahedral meshes which are particularly useful for remeshing CAD models.
Quentin Yang, Caramba. Coercion resistance with cast-as-intended verification.
In electronic voting, two major properties are sought after: verifiability and privacy. A voting protocol is private if no one can learn how you vote, and it is verifiable if the result is guaranteed to be correct with respect to the voters' intention and the tallying process. While they are easy to understand, those properties have some advanced refinements. Coercion resistance is the ability to resist a coercer who wants to buy a vote with either a reward or a threat. In a coercion-resistant protocol, even if a voter wants to prove that they voted in a certain way, they should not be able to do so. On the other hand, cast-as-intended verification is a process which allows a voter to be sure that their vote has been cast (by their voting device) according to their will, even if the voting device is malicious. It typically provides the voter with a return code which tells them how they actually voted. It is straightforward to notice that both properties seem incompatible: how can we provide a proof that you voted correctly without allowing you to use this proof in front of a coercer? In our work, we consider a multi-party protocol on the voter side, assuming that the voters possess two independent devices (for instance a smartphone and a laptop). Assuming that at least one of the devices is honest, we give a simple solution for cast-as-intended verification, and we incorporate it into a coercion-resistant protocol of our own design.
Lundi 14 juin, 11h30
Nariman KHALEDIAN, Tangram. Toward a Functional Model of the Mitral Valve
The mitral valve (MV) of the heart is composed of leaflets that are maintained by chordae during peak systole, which ensures a one-way flow of oxygenated blood from the left atrium to the left ventricle. For a pathological MV, numerical models can be used to predict the result of surgical procedures. As a continuation of previous work on MV geometry extraction in the Tangram team, this PhD thesis focuses on modeling the behavior of the MV based on real-data geometry, with the aim of mimicking the dynamic behavior of a real MV. In multiphysics numerical analysis, capturing the contact between leaflets on real data is a challenging task. Due to the complex nature of the nonlinear problem, different CFD (Computational Fluid Dynamics) approaches were investigated. We started with a structural analysis to capture the contact between the leaflets. For the fluid-structure interaction (FSI) model, the main challenge is the mesh deformation in the physical domain when contact occurs. To address this issue, four different approaches were investigated in parallel: the Immersed Boundary (IB) method, Smoothed Particle Hydrodynamics (SPH), the Overset technique (based on the Arbitrary Lagrangian-Eulerian method, ALE), and a porous-medium approach (also based on ALE). Currently, by implementing the ALE method, we were able to successfully capture the contact between MV leaflets, and the porous-medium technique was used to seal the gap between the leaflets during contact. In the future, we will collect experimental data and compare the behavior of the MV with the numerical analysis, optimize the computation through deep learning, and use the numerical simulation to optimize the valve geometry. The end goal is to move toward a functional model of the mitral valve that is accurate enough for clinical cases.
Justine Basselin, Pixel. Restricted Power diagram on the GPU.
Voronoi diagrams and power diagrams are widely studied and used for their interesting mathematical properties. In particular, they allow the discretization of space for the evaluation of integrals in applications such as the simulation of incompressible fluids or direct and inverse problems in cosmology. In this context, the evaluation of these diagrams is often a bottleneck, as they are needed in a critical part of the optimization.
Despite the recent development of methods to generate Voronoi diagrams on GPUs, computing integrals on restricted power diagrams remains a challenge. The restriction to a complex simulation domain is difficult and has important consequences on the computation time. We propose a robust and extremely fast GPU solution to independently evaluate integrals on each cell of a power diagram restricted by a triangulation of a domain. By using a scheduling strategy and a new integral evaluation method, we achieve a threefold reduction in computation time compared to recent GPU algorithms and a 50-fold reduction compared to CPU methods.
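To illustrate what a restricted power diagram computes, here is a tiny CPU sketch (random sites, weights and domain of my own choosing, unrelated to the GPU algorithm of the talk): every query point is assigned to the site minimizing the power distance |x - s|² - w, and a crude Monte-Carlo loop then estimates a per-cell integral such as the cell area.

```python
import numpy as np

def power_cells(points, sites, weights):
    """Assign each query point to its power-diagram cell: the site minimizing |x - s|^2 - w."""
    d2 = ((points[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)  # squared distances
    return np.argmin(d2 - weights[None, :], axis=1)                   # index of the owning site

rng = np.random.default_rng(0)
sites = rng.random((5, 2))                  # 5 weighted sites in the unit square
weights = rng.random(5) * 0.05
queries = rng.random((100_000, 2))          # Monte-Carlo samples of the domain
labels = power_cells(queries, sites, weights)

# Estimated area of each power cell restricted to the unit square.
print(np.bincount(labels, minlength=len(sites)) / len(queries))
```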
Mardi 15 juin, 11h30
Abdelkarim Elassam, Tangram. Horizon Line and Vanishing points detection using Deep Learning.
We introduce a novel approach to jointly estimate the horizon line (HL) and detect vanishing points (VPs) from a single RGB image. Existing methods are typically based on extracting converging lines and clustering them to detect vanishing points, then estimating the horizon line from the detected VPs. Our method is based on a convolutional neural network (CNN) that directly estimates the horizon line and detects VPs in man-made environments. The key element of our approach is a cascading model in which we break down HL-VP detection into sequential phases. First, we estimate the slope of the HL, then its offset, and finally the location of the VPs on the HL. In each phase, we estimate one parameter while taking the previous ones into consideration. For each parameter, we estimate a probability distribution over possible horizon lines and VPs in the image and extract the best score for each parameter. Moreover, our method can detect multiple VPs using the A-Contrario method and is not constrained to a Manhattan-world assumption. Finally, we introduce the use of additional features, such as normal maps, to improve the performance of our approach. The proposed model is trained on the new Holicity dataset, and we achieve state-of-the-art results on the challenging Horizon Lines in the Wild (HLW) dataset and competitive results on two benchmark datasets.
Louis Massucci, ABC. Statistical learning theory and hybrid dynamical system identification
In this talk, we will discuss how statistical learning theory can help to estimate models of hybrid dynamical systems. These systems, encountered in the field of automatic control, are dynamical systems that switch between different operating modes. Their identification, i.e., the estimation of a model of their input-output behavior, is a nontrivial learning problem, which raises several issues for statistical learning theory. Nonetheless, we will see how these can be addressed to derive statistical guarantees on the model accuracy and how these can be used for model selection and the estimation of the number of modes.
Mercredi 16 juin, 11h30
Guillaume Coiffier, Pixel. Improving global parametrizations for quadmeshing
In computer graphics, a parametrization is a mapping from a surface (resp. volume) mesh into Euclidean 2D (resp. 3D) space. Parametrizations have been widely studied throughout the last decades as they constitute a key step of various applications like texture mapping, surface fitting or remeshing. As of today, unconstrained parametrizations of topological-disk domains are easy to obtain, but many applications still require additional properties.
Our work focuses on the production of robust parametrizations for quad and hex meshing, where the challenges are twofold. Firstly, we need to be able to parametrize domains of arbitrary topology, mainly through the smart placement of singular vertices and cuts. Secondly, parametrizations for quad and hex meshing are destined to be quantized to integer values, with constraints on the boundary as well as low area distortion over the domain.
Nuwan Herath Mudiyanselage, Gamble. Fast algorithm for the visualization of surfaces
The visualization of surfaces defined by an implicit equation of the form f(x,y,z) = 0 is possible thanks to algorithms such as marching cubes. They rely on the evaluation of the function f on a grid of points. We have been working on speeding up the process for polynomial surfaces of high degrees using multipoint evaluation. We are also aiming at giving some guarantees on the constructed discrete surface.
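As a reminder of the grid-evaluation step that marching-cubes-type methods rely on (a naive dense evaluation, not the multipoint-evaluation speed-up discussed in the talk; the torus polynomial and grid resolution are arbitrary), the sketch below evaluates f on a regular grid and flags the cells whose corners change sign.

```python
import numpy as np

def torus(x, y, z, R=1.0, r=0.4):
    # Example implicit polynomial surface f(x, y, z) = 0 (a torus of radii R and r).
    return (x**2 + y**2 + z**2 + R**2 - r**2) ** 2 - 4 * R**2 * (x**2 + y**2)

n = 64
axis = np.linspace(-1.6, 1.6, n)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
F = torus(X, Y, Z)                                 # one evaluation of f per grid point

# A cell may contain a piece of the surface if f changes sign among its 8 corners.
positive = F > 0
corners = np.stack([positive[i:n-1+i, j:n-1+j, k:n-1+k]
                    for i in (0, 1) for j in (0, 1) for k in (0, 1)])
crossing = ~(corners.all(axis=0) | (~corners).all(axis=0))
print(crossing.sum(), "grid cells intersect the surface")
```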
Jeudi 17 juin, 11h30
Hamid Boukerrou, Caramba. Hybrid architecture of LPV dynamical systems in the context of cybersecurity
In this talk, we propose a hybrid architecture involving LPV dynamical systems for encryption purposes, having in mind the context of cybersecurity. Such a hybrid architecture is motivated by the fact that it is a natural model, recast in a control-theoretic framework, of a so-called statistical self-synchronizing stream cipher. It is shown that flatness is central to guarantee the necessary synchronization between the cipher and the decipher. In this context, beyond synchronization, security must also be taken into account as a constraint. We especially focus on diffusion as a security criterion. The hybrid architecture is designed to take advantage of both properties simultaneously.
Francois Protais, Pixel. Computing Foldover-free maps
Mapping a triangulated surface to 2D space (or a tetrahedral mesh to 3D space) is an important problem in geometry processing. In computational physics, untangling plays an important role in mesh generation: it takes a mesh as input and moves the vertices to get rid of foldovers. In fact, mesh untangling can be considered as a special case of mapping where the geometry of the object is to be defined in the map space and the geometric domain is not explicit, supposing that each element is regular. In this paper, we propose a mapping method inspired by the untangling problem and compare its performance to the state of the art. The main advantage of our method is that the untangling aims at producing locally injective maps, which is the major challenge of mapping. In practice, our method produces locally injective maps in very difficult settings, and with less distortion than previous work, both in 2D and 3D. We demonstrate it on a large reference database as well as on more difficult stress tests.
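To make the notion of foldover concrete (a minimal check, not the untangling method itself): in 2D, a triangle of the map is folded over exactly when its signed area, i.e. the determinant of its Jacobian with respect to the reference orientation, becomes non-positive.

```python
import numpy as np

def signed_area(a, b, c):
    """Twice the signed area of triangle (a, b, c); non-positive means the triangle is inverted."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def count_foldovers(uv, triangles):
    """Count inverted (folded-over) triangles of a 2D map given per-vertex uv coordinates."""
    return sum(1 for i, j, k in triangles if signed_area(uv[i], uv[j], uv[k]) <= 0.0)

uv = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.4, 0.1]])
triangles = [(0, 1, 3), (1, 2, 3), (2, 0, 3)]   # vertex 3 lies inside triangle (0, 1, 2)
print(count_foldovers(uv, triangles))           # 0: this toy map is locally injective
```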
2020, Covid pandemic
2019
December 11th, 11:00
Room A008
Benedikt Kolbe,
INRIA Nancy Grand Est
Structures in three-dimensional Euclidean space from hyperbolic tilings
A main focus of modern crystallography is to explore the systematic enumeration and construction of nets in Euclidean space. A recent approach (EPINET) attempts to enumerate crystalline frameworks that arise as structures derived from graphs on triply periodic minimal surfaces like the gyroid, the primitive and the diamond minimal surface. We imagine drawing lines on such a surface. Most of us are pretty lazy, so we most likely manage only a small doodle. However, what if the drawing for the rest of the surface can be filled in by invoking symmetries? In this talk, we will explain recent results that provide a complete classification and tools for the enumeration of all ways of drawing on a hyperbolic surface a given combinatorial structure invariant under an arbitrary symmetry group.
In the first part of my talk I will motivate the approach and provide some background. The second part will focus on some of the results from my PhD at the Technical University of Berlin that make possible the enumeration of crystallographic structures on triply periodic surfaces as well as implementations.
There are tie-ins to geometry, braid theory, combinatorial group and tiling theory, and even some physics and chemistry.
October 22nd, 10:00
Room C009
Josiane Zerubia,
INRIA Sophia Antipolis - Méditerranée
Marked Point Processes for Object Detection and Tracking in High Resolution Images: Application to Remote Sensing Data
In this talk, we combine the methods from probability theory and stochastic geometry to put forward new solutions to the multiple object detection and tracking problem in high resolution remotely sensed image sequences. First, we present a spatial marked point process model to detect a pre-defined class of objects based on their visual and geometric characteristics. Then, we extend this model to the temporal domain and create a framework based on spatio-temporal marked point process models to jointly detect and track multiple objects in image sequences. We propose the use of simple parametric shapes to describe the appearance of these objects. We build new, dedicated energy based models consisting of several terms that take into account both the image evidence and physical constraints such as object dynamics, track persistence and mutual exclusion. We construct a suitable optimization scheme that allows us to find strong local minima of the proposed highly non-convex energy.
As the simulation of such models comes with a high computational cost, we turn our attention to the recent filter implementations for multiple objects tracking, which are known to be less computationally expensive. We propose a hybrid sampler by combining the Kalman filter with the standard Reversible Jump MCMC. High performance computing techniques are also used to increase the computational efficiency of our method. We provide an in-depth analysis of the proposed framework based on standard multiple object tracking metrics and computational efficiency. This analysis yields a very good detection and tracking performance at the price of an increased complexity of the models. Exhaustive tests have been conducted on various high resolution satellite image sequences.
In collaboration with Dr. Paula Craciun during her PhD at INRIA-SAM and Dr. Mathias Ortner from Airbus DS
11 Octobre, 10:30
Salle C103
Ioana Ilea,
Université Technique de Cluj-Napoca
Robust classification on the space of covariance matrices, with application to texture
In recent years, covariance matrices have proven their value in many signal and image processing applications. The work presented here focuses on the use of these matrices as descriptors for classification. In this context, robust classification algorithms are proposed by developing the following aspects.
First, robust estimators of the covariance matrix are used to reduce the impact of outlying observations. Then, the Riemannian Gaussian distribution, as well as its extension to mixture models, is considered for modelling covariance matrices. In addition, a new estimator of the centroid, the Huber estimator, is proposed based on the theory of M-estimators. Furthermore, descriptors called Fisher vectors on the space of covariance matrices are introduced to model non-stationary images. Finally, a hypothesis test based on the geodesic distance is introduced to regulate the false-alarm probability of the classifier. All these contributions are validated on the classification of texture images, brain signals, and simulated and real polarimetric radar images.
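As background on the Riemannian view of covariance matrices used above (a standard formula, not a contribution of this work), the geodesic distance commonly used on the space of symmetric positive-definite matrices is the affine-invariant one:

```latex
d(\Sigma_1, \Sigma_2)
  = \bigl\lVert \log\bigl(\Sigma_1^{-1/2}\,\Sigma_2\,\Sigma_1^{-1/2}\bigr) \bigr\rVert_F
  = \Bigl( \sum_{i=1}^{n} \log^2 \lambda_i \Bigr)^{1/2},
```

where the λ_i are the eigenvalues of Σ₁⁻¹ Σ₂.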
September 24th, 10:00
Room C103
Douglas Perrin,
Harvard Medical School
Patient-Specific Mitral Valve Segmentation from 4D Ultrasound
Successful clinical treatment of mitral valve disease is dependent on understanding the complexities of patient-specific valve behavior. Mechanical models of the mitral valve, designed to predict valve closure and generated using patient-specific measurements, have proven to be useful tools for studying valve behavior. To be clinically feasible, these models need to come from ultrasound images.
I will be presenting work done in collaboration with Dr. Rob Schneider during his PhD thesis. In this work, we developed a method for generating a patient-specific mitral valve model (annulus, leaflets, state) from three-dimensional ultrasound. I will describe our automated approach for the initial demarcation of the mitral valve annulus, which we compared to manual delineations made by experts. Using a valve state predictor that we developed and a constrained optical flow algorithm, we segment the annulus in the remaining frames of the ultrasound sequence. The location of the leaflets in the image frame just before valve closure is then automatically found by using the previously delineated annulus to bound the segmentation. Using an active surface model, the leaflet geometry and the four-dimensional (4D) annulus, the leaflets are tracked during valve closure. This enabled, for the first time, the segmentation of an accurate and detailed mitral leaflet coaptation region.
MFX Seminar
September 20th, 14:00
Room B013
Michal Piovarci,
Università della Svizzera italiana
Perception aware fabrication
Faithfully reproducing real-world objects on a 3D printer is a challenging endeavor. A large number of available materials and freedom in material deposition make efficient exploration of the printing space difficult. Furthermore, current 3D printers can perfectly capture only a small fraction of real-world objects, which makes high-quality reproduction challenging. Interestingly, many of the applications of 3D printing are experienced by humans through our senses of touch, sight, or hearing. These senses have inborn limitations given by biological constraints: eyes have a limited capability to distinguish high-frequency information; fingers feel applied forces in a non-linear fashion.
In this talk, I will introduce you to the concept of perception-aware fabrication that aims to exploit these limitations to create more efficient fabrication techniques which offer equal or higher quality as perceived by a human observer. A core element of perception-aware fabrication is a perceptual space which models the response of the human sensory system to external stimuli. I will show you how to derive such spaces and how to apply them in the context of computational fabrication. Finally, I will show you how we can leverage perceptual insights to design more efficient numerical simulations. I will demonstrate this general concept in the context of two applications: manufacturing objects with prescribed compliance properties, and designing customized digital styli that mimic the behavior of traditional drawing tools. Last but not least, I will present a technique for an efficient design of surfaces with prescribed reflectance behavior.
Bio:
Michal Piovarci is a Ph.D. student at USI Lugano in the Perception, Display, and Fabrication Group led by Piotr Didyk. He received his master's degree in computer science from Comenius University in Bratislava in 2014. In 2015 he started an internship at the Max Planck Institute for Informatics in Germany and later continued as a Ph.D. student, first at MPI and then at USI Lugano. His work lies at the intersection of fabrication, perception, and numerical simulation. The main objective is to leverage the limitations of the human sensory system to design more efficient and more intuitive fabrication methods.
MFX Seminar
September 19th, 16:00
Room A008
Tim Kuipers,
TU Delft
CrossFill: Foam Structures with Graded Density for Continuous Material Extrusion
Tim Kuipers is a software engineer working on path planning for 3D printing at Ultimaker, a manufacturer of desktop / concrete floor FDM 3D printers.
He started a PhD two years ago at the Delft University of Technology, aimed at physicalizing heterogeneous model properties - everything from varying surface colors to varying stiffness throughout the print.
On Thursday 19 September he will present his latest work "CrossFill: Foam Structures with Graded Density for Continuous Material Extrusion" which utilizes a space-filling surface to enforce continuous extrusion of print paths along each layer.
July 1st, 14:00
Room C103
Pete Hammer,
Boston
Computational fluid dynamics to guide care of patients with congenital heart disease
Computational fluid dynamics (CFD) refers to use of computers to numerically approximate the Navier-Stokes equations, which are the basic conservation laws that govern fluid flow. This methodology is well established in the design of mechanical systems and has more recently been used to study biological flows in the human body. In this talk, I will describe recent work by my group using CFD to better understand blood flow in patients with various forms of congenital heart disease, including single-ventricle defects, anomalous coronary arteries, and pulmonary vein stenosis. I will give examples of both disease-specific studies, where the goal is generalized treatment guidelines, and patient-specific CFD studies, where results can directly inform patient care.
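For reference, the incompressible Navier-Stokes equations that such CFD solvers discretize are, in their standard form (velocity u, pressure p, density ρ, dynamic viscosity μ, body force f):

```latex
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad
\nabla\cdot\mathbf{u} = 0 .
```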
July 9th,
Room A006
Baogang Hu,
LIAMA/NLPR, Beijing, China
Information Theoretic Learning (ITL) in Pattern Classification
While ITL criteria (a term coined by Principe et al., 2000) have played an increasing role in the study of machine learning, we are still puzzled by their successes and limitations in applications, and by their connections to empirical learning criteria. This talk will focus on ITL in the study of pattern classification. The connections between the two sets of learning criteria are presented in a binary classification setting. I will introduce the novel theory of abstaining learning for both Bayesian classifiers and mutual information classifiers, and cost-free learning from real-world data sets in comparison with cost-sensitive learning. The talk will show that ITL provides a foundation for understanding learning mechanisms.
Speaker:
Baogang Hu received his Ph.D. degree in 1993 from the Department of Mechanical Engineering, McMaster University, Canada. Currently, he is a professor at the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences (CAS), Beijing, China. From 2000 to 2005, he served as the Chinese Director of the "Chinese-French Laboratory of Information, Automation and Applied Mathematics" (LIAMA). His current research includes machine learning and plant growth modeling.
Day of Department / Journée des doctorants du département
25 mai 2019, 9h-14h30
Salle A008
- 9h30 Jimmy Etienne, MFX : Curved slicing for additive manufacturing
- 9h50 Gabrielle De Micheli, Caramba : Recovering ECDSA cryptographic keys from partial information
- 10h10 Charles Duménil, Gamble : Expected size of the Delaunay triangulation of random points on a surface
- 10h30 Pause
- 11h Aude Le Gluher, Caramba : How much time does it take to factor an integer?
- 11h20 George Krait, Gamble : Numerical Algorithm for the Topology of Singular Plane Curves
- 11h40 Semyon Efremov & Thibaut Tricard, MFX : Procedural phasor noise
- 13h30 Remi Decelle, Adagio : Measuring wood quality of logs from low-cost sensors at the sawmill or at the road side
- 13h50 Paul Huynh, Caramba : NIST's lightweight cryptography initiative
Jimmy Etienne, MFX: Curved slicing for additive manufacturing
3D printing by local material deposition is becoming increasingly important in companies and homes, but many defects mean that these parts are rarely used outside the prototyping phases. The most common defects are:
- delamination (structural weakness created by the deposition of successive flat layers)
- the staircase effect (visual defect created by the deposition of successive layers on the surface)
The approach envisaged is to use existing equipment in the field (3D printers, robotic arms) and to be able to print objects with better structural and/or visual quality. To this end, we plan to develop "curved" slicing techniques in order to avoid the current techniques of slicing by plane (although there is still room for improvement).
In the literature, existing approaches never directly address the problems associated with the automatic computation of curved slices. There are proofs of concept of 3D printing outside the plane, scheduling algorithms for wire-frame objects, objects cut into different printable areas (still bounded by planes), and other works that print only the outer surface or a thin layer in a curved manner. The scientific objective of this thesis is to revisit slicing algorithms by allowing deposition along arbitrary curves, in order to improve the quality of the parts (surface finish and/or mechanical resistance). The usual use of planar slices actually simplifies the slicing problem: when moving to curved slicing, the addition of a large number of degrees of freedom makes the problem difficult to address. We therefore follow a progressive approach, which consists in gradually relaxing the flatness constraints to reach a generic method at the end of the thesis.
Gabrielle De Micheli, Caramba: Recovering ECDSA cryptographic keys from partial information
A signature algorithm such as ECDSA allows a person to sign a message so that the receiver is guaranteed that the message indeed comes from the right issuer. The Elliptic Curve Digital Signature Algorithm, known as ECDSA, is such a signing algorithm, where each signature on a message requires the use of a nonce k. ECDSA, like other cryptographic protocols, is known to leak information which can then be used to recover the secret key of the signer. In this work, we look at different ways to recover the secret key used in ECDSA when partial information about the nonce k is known, in particular some consecutive bits. We explain how key recovery can be reduced to a difficult lattice problem.
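As standard background (not specific to this work), recall the ECDSA signing equations over a group of prime order q with generator G, secret key d and per-signature nonce k:

```latex
r = x\bigl([k]G\bigr) \bmod q,
\qquad
s = k^{-1}\bigl(H(m) + r\,d\bigr) \bmod q .
```

Each signature therefore gives one linear relation between k and d modulo q, which is why partial knowledge of the nonces of several signatures can typically be turned into an instance of the Hidden Number Problem and attacked with lattice reduction.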
Charles Duménil, Gamble: Expected size of the Delaunay triangulation of random points on a surface
It is well known that a planar triangulation has linear size. This follows from the Euler characteristic. In dimension 3 or higher, one can find triangulations of size Θ(n²).
We compute the size of the Delaunay triangulation when the points are distributed on a surface. For instance, on a cylinder, it is proved that a bad size of order n√n can occur even with a good deterministic sample. But if the points are distributed at random, then a tight bound of Θ(n ln n) is reached in expectation. We try to adapt those results to generic surfaces, where good samples lead to a triangulation of size O(n ln n) and where we expect a linear expected size.
Aude Le Gluher, Caramba: How much time does it take to factor an integer?
The security of some asymmetric cryptography relies on the fact that it takes a prohibitive amount of time to factor large enough integers. But how large is "large enough"? In order to answer this question, I study a factoring method: the number field sieve (NFS). The goal of my thesis is to get the best possible estimates of its computing time, both from a theoretical and a practical viewpoint. In this talk, I will present a method I designed to assess the asymptotic complexity of NFS.
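For context (a standard fact rather than a result of the talk), the heuristic asymptotic complexity of the general number field sieve for factoring an integer N is usually written in L-notation as

```latex
L_N\!\left[\tfrac{1}{3}, \left(\tfrac{64}{9}\right)^{1/3}\right]
  = \exp\!\left( \left( \left(\tfrac{64}{9}\right)^{1/3} + o(1) \right)
    (\ln N)^{1/3} (\ln\ln N)^{2/3} \right),
```

and estimating realistic computing times requires going beyond the o(1) term hidden in this formula.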
George Krait, Gamble: Numerical Algorithm for the Topology of Singular Plane Curves
We are interested in computing the topology of plane singular curves. For this, the singular points must be isolated. Numerical methods for isolating singular points are efficient but not certified in general. We are interested in developing certified numerical algorithms for isolating the singularities. In order to do so, we restrict our attention to the special case of plane curves that are projections of smooth curves in higher dimensions. In this setting, we show that the singularities can be encoded by a regular square system whose isolation can be certified by numerical methods. This type of curves appears naturally in robotics applications and scientific visualization.
Semyon Efremov & Thibaut Tricard, MFX: Procedural phasor noise
Procedural pattern synthesis is a fundamental tool of Computer Graphics, ubiquitous in games and special effects. By calling a single procedure in every pixel -- or voxel -- large quantities of details are generated at low cost, enhancing textures, producing complex structures within and along surfaces. Such procedures are typically implemented as pixel shaders.
We propose a novel procedural pattern synthesis technique that exhibits desirable properties for modeling highly contrasted patterns, especially well suited to producing surface and microstructure details. In particular, our synthesizer affords precise control over the profile, orientation and distribution of the produced stochastic patterns, while allowing all these parameters to be graded spatially. Our technique defines a stochastic smooth phase field (a phasor noise) that is then fed into a periodic function (e.g. a sine wave), producing an oscillating field with prescribed main frequencies and preserved contrast of the oscillations. In addition, the profile of each oscillation is directly controllable (e.g. sine wave, sawtooth, rectangular or any 1D profile). Our technique builds upon a reformulation of Gabor noise in terms of a phasor field that affords a clear separation between local intensity and phase.
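A rough, self-contained sketch of the idea (kernel shape, frequencies and all parameters are arbitrary choices of this example, not the authors' implementation): Gabor-like contributions are summed as complex phasors, and a periodic profile applied to the phase of the sum yields a highly contrasted oscillating pattern.

```python
import numpy as np

def phasor_noise(shape=(256, 256), n_kernels=60, freq=0.08, bandwidth=0.02, seed=0):
    """Sum Gabor-like kernels as complex phasors, then apply a periodic profile to the phase."""
    rng = np.random.default_rng(seed)
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    field = np.zeros(shape, dtype=complex)
    for _ in range(n_kernels):
        cx, cy = rng.random(2) * np.array([shape[1], shape[0]])
        theta = rng.random() * 2 * np.pi                  # kernel orientation
        phase0 = rng.random() * 2 * np.pi
        d = np.cos(theta) * (xs - cx) + np.sin(theta) * (ys - cy)
        envelope = np.exp(-bandwidth * ((xs - cx) ** 2 + (ys - cy) ** 2))
        field += envelope * np.exp(1j * (2 * np.pi * freq * d + phase0))
    phase = np.angle(field)                               # the "phasor" phase field
    return np.sin(phase)                                  # swap sin for sawtooth/square profiles

img = phasor_noise()
print(img.shape, float(img.min()), float(img.max()))
```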
Remi Decelle, Adagio: Measuring wood quality of logs from low-cost sensors at the sawmill or at the road side
The thesis focuses on the estimation of wood quality. The analysis of X-ray computed tomography (CT) images has been extensively studied over the past 30 years. Here, we are instead interested in the analysis of RGB images taken by digital cameras or smartphones, and we only have images of log ends. In the literature, few studies have addressed the processing of such images.
The quality of the wood determines its use and price. This quality also influences the physical properties of the log. Several features highlight this quality, such as the distance between the position of the pith and the geometric centre of the log, or the average thickness of the rings.
Among the features mentioned, an important one to detect is the pith. Few studies have focused on this task for such images, and those that exist make some assumptions: a manual segmentation of the log must be done beforehand, or the log must be centered in the image. Our first work was to develop an algorithm to locate the pith without these assumptions. Our current work is to study the gap between the pith and the geometric centre of the log (a quality indicator). To do this, we are looking at segmentation tools.
Paul Huynh, Caramba: NIST's lightweight cryptography initiative
Driven by the need for cryptographic primitives that can run on devices with very low computing power, small memory and a limited power supply, a process to evaluate and standardize these so-called lightweight cryptographic algorithms was recently initiated by the National Institute of Standards and Technology (NIST). This presentation intends to give an overview of this initiative and will briefly introduce our submission, "Lilliput-AE".
23 mai 2019, 10h30
Salle Jean Legras
Robin Larrieu,
LIX
Fast polynomial reduction for generic bivariate ideals
Let A, B in K[X,Y] be two bivariate polynomials over an effective field K, and let G be the reduced Gröbner basis of the ideal I := <A,B> generated by A and B, with respect to some weighted-degree lexicographic order. Under some genericity assumptions on A, B, we will see how to reduce a polynomial with respect to G with quasi-optimal complexity, despite the fact that G is much larger than the intrinsic complexity of the problem. For instance, if A, B have total degree n, that is O(n²) coefficients, then G has O(n³) coefficients but reduction can be done in time O(n²).
We will consider two situations where these new ideas apply, leading to different algorithms:
- First, there is a class called "vanilla Gröbner bases" for which there is a so-called terse representation that, once precomputed, allows any polynomial P to be reduced in time O(n²). In this setting, assuming suitable precomputation, multiplication and change of basis can therefore be done in time O(n²) in the quotient algebra K[X,Y] / <A,B>.
- Then, we assume that A and B are given in total degree and we consider the usual degree lexicographic order. Although the bases are not vanilla in this case, they admit a so-called concise representation with similar properties. Actually, the precomputation can also be done efficiently in this particular setting: from the input A, B, one can compute a Gröbner basis in concise representation in time O(n²). As a consequence, multiplication in K[X,Y] / <A,B> can be done in time O(n²) including the cost of precomputation.
2018
SÉMINAIRE DU DÉPARTEMENT
proposé par MFX.
29 novembre 2018, 10h00
Amphi Gilles Kahn (C)
Jean-Baptiste Labrune,
Bell Labs
Radical Design: augmented environments for building dynamic objects with active matter
Programmable matter, also called active matter, is a class of physical and biological materials that allows new kinds of affordances and interactions (Tibbits, Active Matter, MIT Press, 2017). Static entities become dynamic and can shapeshift, turning elastic or solid on demand. Active matter is still at the prototype stage in laboratories, hence neither easily available nor simple to use for non-experts.
However, just as VLSI opened the field for advanced software engineering, technologies such as 3D/4D printing and bio/nano-assemblers will probably popularize these materials soon. In this new context, how will we program, but also design, with these new materials? What will be the appropriate tools and interaction paradigms to build this world that Ivan Sutherland defined as "ultimate"?
On the basis of my research as a PhD student at INRIA and a postdoc at the MIT Medialab, I will show some initial answers to these questions. I will detail in particular how the vision of Radical Atoms (Ishii, Labrune et al., 2012) proposes a new kind of paradigm called Radical Design, connecting biology, material science, computational modelling of microstructures, HCI and design.
Jean-Baptiste Labrune is a designer and researcher specialized in the study of creative places and processes. His research focuses on the notion of "Exaptation", the way in which users of technology reconfigure and hack it, producing original and unexpected functions and uses. He completed his PhD at INRIA and a postdoc at MIT, then became a researcher at Bell Labs and a professor of interaction design at ENSAD (Arts Décos School). He organizes and leads many "hybrid" workshops in art & science venues in France (Arts Décos, Beaux-Arts, Palais de Tokyo, Mains d'Oeuvres) and internationally (Mediamatic, Interaction Design Institute Ivrea, IMAL, Hangar, Hyperwerk, Akademie Schloss Solitude, MIT Medialab).
COLLOQUIUM DU LORIA, proposé par le département
27 novembre 2018, 13:30,
Amphi Gilles Kahn
Claire Mathieu, École Normale Supérieure
Stable Matching in Practice
la page du colloquium
SÉMINAIRE DU DÉPARTEMENT
proposé par ALICE.
8 novembre 2018, 10h00
Salle Jacques-Louis Lions (A008)
Etienne Corman,
Carnegie Mellon University
Numerical Representation of Geometry for Shape Deformation
Geometric data are today largely available and accessible due to the democratization of 3D modelling software and low-cost 3D scanning techniques. Thus, defining a compelling representation in order to compare, measure similarity or find similar patterns between non-Euclidean data has become one of the great challenges in geometry processing. We propose a novel way to capture and
characterize distortion between pairs of shapes by extending the recently
proposed framework of shape differences built on functional maps.
We demonstrate that a set of four operators is complete, capturing intrinsic and
extrinsic structure and fully encoding a shape up to rigid motion in both
discrete and continuous settings.
SÉMINAIRE DU DÉPARTEMENT
proposé par GAMBLE.
8 novembre 2018, 14h00
Salle Jean Legras (A006)
Chee Yap,
New York University
Subdivision Path Planning in Robotics: Theory and Practice
ABSTRACT:
Motion planning is a fundamental problem in robotics.
We propose to design path planners based on three foundations:
(1) The notion of "resolution-exact" planners. Conceptually, it avoids the zero problem of exact computation.
(2) The use of "soft predicates" for achieving such algorithms in the subdivision approach.
(3) The "feature-based technique" for constructing such soft predicates.
We formulate an algorithmic framework called "Soft Subdivision Search" (SSS) that incorporates these ideas.
There are many parallels between our framework and
the well-known Sampling or Probabilistic Roadmap framework.
Both frameworks lead to algorithms that are
- practical
- easy to implement
- flexible and extensible
- with adaptive and local complexity.
In contrast to sampling and previous resolution approaches,
SSS confers strong theoretical guarantees, including halting.
In a series of papers we demonstrated the power of these ideas, by producing planners for planar robots with 2, 3 and 4 degrees of freedom (DOF) that outperform or match state-of-the-art sampling-based planners. Most recently, we produced a planner for two spatial robots (rod and ring) with 5 DOFs. Non-heuristic planners for such robots have been considered a challenge for the subdivision approach.
We outline a general axiomatic theory underlying these results, including subdivision in non-Euclidean configuration spaces.
Joint work with Y.J. Chiang, C.H. Hsu, C. Wang, Z. Luo, B. Zhou, J.P. Ryan.
7 novembre 2018, 10h30
Salle Jean Legras (A008)
Chee Yap,
New York University
Continuous Amortization and the Complexity of Root Isolation
The problem of isolating all roots (real or complex)
of a polynomial is a highly classical problem.
For integer polynomials, it has been known
for 30 years from the work of Schönhage and Pan
that the bit-complexity of this problem is O(n² L)
when the polynomial has degree n with L-bit coefficients.
Pan informally calls such bounds "near optimal".
But their algorithm has never been implemented.
Until recently, it was assumed that such near-optimal complexity
requires the divide-and-conquer method of Schönhage.
Meanwhile, subdivision algorithms for root isolation
have proved efficient and widely implemented. But the
worst case complexity of subdivision algorithms always
lagged behind the divide-and-conquer algorithms. In
the last few years, this gap was finally closed for
complex roots [ISSAC'2016]. Moreover, our
near-optimal root isolation algorithm was implemented
this year with good preliminary results.
In this talk, we focus on some techniques that proved useful
in the complexity analysis of subdivision algorithms
-- in particular the idea of "Continuous Amortization".
Based on several papers with
M.Sagraloff, V.Sharma, M.Ruben, J.Xu and M.Burr.
SÉMINAIRE DU DÉPARTEMENT
proposé par GAMBLE.
11 juillet 2018, 14h30
Salle Philippe Flajolet (C005)
Nicolas Delanoue,
Université d'Angers
Interval analysis and optimal transport
The optimal transport problem was first formalized by the French mathematician Gaspard Monge in 1781. Since Kantorovich, this (generalized) problem has been formulated in the language of measure theory, and it was brought back to the forefront by Cédric Villani (Fields Medal 2010). Recently, for the case where the cost function is polynomial, Jean-Bernard Lasserre and Didier Henrion proposed a solution method based on the moments of measures to reformulate the problem. Their approach yields a linear programming problem with convex constraints.
Building on interval analysis, we propose a guaranteed discretization that yields a linear program with linear constraints whose optimum is a lower bound on the optimal value of the Kantorovich problem. During the talk, I will recall the Kantorovich problem and discuss our approach. If time permits, I will also present a guaranteed discretization of the dual problem, which gives an upper bound on the optimum.
[1] Topics in Optimal Transportation, Cédric Villani, AMS (2003).
[2] Optimal Transportation and Economic Applications, Guillaume Carlier,
link
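To fix ideas on the discrete side, between two finitely supported measures the Kantorovich problem is just a linear program over transport plans. The sketch below solves a tiny instance with scipy; it only illustrates that linear program, not the guaranteed interval-based discretization discussed in the talk.

```python
import numpy as np
from scipy.optimize import linprog

# Two discrete measures mu (on points x) and nu (on points y), and a ground cost matrix c.
x = np.array([0.0, 1.0, 2.0]); mu = np.array([0.4, 0.4, 0.2])
y = np.array([0.5, 1.5]);      nu = np.array([0.5, 0.5])
c = (x[:, None] - y[None, :]) ** 2                 # quadratic cost c(x_i, y_j)

m, n = c.shape                                     # unknowns: plan gamma[i, j] >= 0, flattened
A_eq = np.zeros((m + n, m * n))
for i in range(m):                                 # row marginals: sum_j gamma[i, j] = mu[i]
    A_eq[i, i * n:(i + 1) * n] = 1.0
for j in range(n):                                 # column marginals: sum_i gamma[i, j] = nu[j]
    A_eq[m + j, j::n] = 1.0
b_eq = np.concatenate([mu, nu])

res = linprog(c.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("optimal transport cost:", res.fun)
print(res.x.reshape(m, n))                         # the optimal transport plan
```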
lundi 18 juin 2018, 9h00
Salle C103
Robert D. Howe
Harvard Biorobotics Lab
Ultrasound guided heart surgery: Ultrasound image processing, fast motion compensation, catheter robot control.
To treat defects within the heart, surgeons currently use stopped-heart techniques. These procedures are highly invasive and incur a significant risk of neurological impairment. We are developing methods for performing surgery within the heart while it is beating. New real-time 3-D ultrasound imaging allows visualization through the opaque blood pool, but this imaging modality poses difficult image processing challenges due to poor resolution, acoustic artifacts, and data rates of 30 to 40 million voxels per second. To track instruments within the heart we have developed a Radon transform-based algorithm. Implementation using a graphics processor unit (GPU) enables real-time processing of the ultrasound data stream. For manipulation of rapidly moving cardiac tissue we have created a fast robotic device that can track the tissue based on ultrasound image features. This allows the surgeon to interact with the heart as if it was stationary. Our in vitro studies show that this approach enhances dexterity and lowers applied forces. To complete integration of ultrasound imaging with the robotic device we have developed a predictive controller that compensates for the imaging and image processing delays to ensure good tracking performance. We will present applications of this technology in atrial septal defect closure and mitral valve annuloplasty procedures, demonstrating the potential for improved patient outcomes.
mercredi 6 juin 2018, 15h00
Salle A006
Peter E. Hammer
Harvard Biorobotics Lab
Using engineering and computational methods to inform surgery for congenital heart disease.
Surgeons have traditionally used an approach based on clinical experience and intuition. However, in congenital heart disease, the variety of anatomical abnormality is almost endless, defying an experience-based approach. Analytical approaches based on quantitative analysis and engineering principles have been proposed and show promise to help improve surgical management in highly complex cases. In this talk, examples will be presented to show the use of quantitative approaches to inform surgery, including: (1) characterization of the mechanical properties of cardiovascular tissues and use of this data to guide surgical techniques, (2) simulation to explore novel methods for valve reconstruction in the growing child, (3) lumped element modeling of the circulation to aid in decision-making for complex circulations.
Day of Department / Journée des doctorants du département
25 mai 2018, 9h-12h
Salle Philippe Flajolet (C005)
- 9h30 Vincent Gaudillière, Magrit : Region-based epipolar and planar geometry estimation in low-textured environments
- 9h50 Daryna Panicheva, Magrit : Automatic segmentation of mitral valve chordae
- 10h10 Paul Huynh, Caramba : Constructions based on Generalized Feistel Networks for lightweight cryptography
- 10h30 Pause
- 11h George Krait, Gamble : The topology of singular curves and surfaces
- 11h20 Florian Lietard : Avoidability of additive k-powers
- 11h40 Simon Masson, Caramba : Pairings in elliptic curves: how to share a secret with many people
Vincent Gaudilliere (Magrit): Region-based epipolar and planar geometry estimation in low-textured environments.
Given two views of the same scene, usual correspondence-based geometry estimation techniques exploit the well-established effectiveness of keypoint descriptors. However, such features have a hard time in poorly textured man-made environments, possibly containing repetitive patterns and/or specularities, such as industrial settings. It therefore appears necessary to reduce the reliance on visual keypoints, for example by taking advantage of more suitable features (e.g. line segments, vanishing points) or of global properties of the environment. In that direction, we propose a novel method for two-view epipolar and planar geometry estimation that first aims at detecting and matching the physical vertical planes frequently present in industrial environments, before estimating the underlying homographies thanks to three different types of local features. The inferred local correspondences are finally used to improve the epipolar geometry estimation.
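As textbook background on the two geometric objects estimated here (not the contribution of the talk): corresponding image points x and x' satisfy the epipolar constraint through the fundamental matrix F, and the images of points lying on a 3D plane are related by a plane-induced homography H. With one common convention,

```latex
\mathbf{x}'^{\top} F\, \mathbf{x} = 0,
\qquad
\mathbf{x}' \sim H\,\mathbf{x}
\quad\text{with}\quad
H = K' \left( R - \frac{\mathbf{t}\,\mathbf{n}^{\top}}{d} \right) K^{-1},
```

where (R, t) is the relative camera pose, K and K' are the intrinsic matrices, and the plane has unit normal n and distance d to the first camera.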
Daryna Panicheva (Magrit): Automatic segmentation of mitral valve chordae
Mitral valve diseases are common cardiovascular disorders. Treatments are currently based on the surgeon's experience. A promising approach consists in simulating the valve behaviour with a computer-based model during treatment planning. As most state-of-the-art models are simple and generic, our strategy is to develop image-based patient-specific models. This includes the extraction of the valve geometry and topology as well as a biomechanical model.
In this talk, we will present a pipeline for the segmentation of the valve chordae from CT scans. Our solution is based on an iterative RANSAC-like method coupled with skeletonization processes. The skeleton computation provides information about the general topology of the structures, and the RANSAC-like method allows us to extract points corresponding to a parametric model, which we define according to the chordae anatomy.
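As a reminder of the generic RANSAC principle behind such a pipeline (a toy 2D line-fitting example with made-up data, not the chordae-specific parametric model of the talk):

```python
import numpy as np

def ransac_line(points, n_iters=500, inlier_tol=0.05, seed=0):
    """Generic RANSAC: fit a line to 2 random points, keep the model with the largest
    consensus set, then refit y = a*x + b on its inliers by least squares."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iters):
        p, q = points[rng.choice(len(points), 2, replace=False)]
        d = q - p
        normal = np.array([-d[1], d[0]]) / np.linalg.norm(d)   # unit normal of the candidate line
        dist = np.abs((points - p) @ normal)                   # point-to-line distances
        inliers = dist < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    xs, ys = points[best_inliers].T
    a, b = np.polyfit(xs, ys, 1)
    return a, b, best_inliers

rng = np.random.default_rng(1)
xs = rng.random(100)
pts = np.c_[xs, 0.7 * xs + 0.1 + rng.normal(0, 0.01, 100)]     # points near y = 0.7 x + 0.1
pts[:20] = rng.random((20, 2))                                 # 20% gross outliers
print(ransac_line(pts)[:2])                                    # roughly (0.7, 0.1)
```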
Paul Huynh (Caramba): Constructions based on Generalized Feistel Networks for lightweight cryptography
Lightweight cryptography has been an active topic in recent years, driven by the need for primitives that can run on devices with very low computing power. This presentation will focus on lightweight block ciphers and more specifically on Generalized Feistel Networks (GFNs). The diffusion properties of such schemes will be studied using a matrix representation, then a broader class of Feistel networks that are well suited for cryptographic applications will be introduced.
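For readers unfamiliar with the construction, below is a minimal sketch of one round of a classical Type-2 generalized Feistel network on 2k branches (the round function, keys and constants are toy choices, not a real cipher such as Lilliput):

```python
def gfn_round(blocks, round_keys, f=lambda x, k: (x * 0x9E3779B1 + k) & 0xFFFFFFFF):
    """One Type-2 GFN round: apply F to each even branch, XOR the result into the
    following branch, then cyclically rotate the branches left by one position."""
    b = list(blocks)
    for i in range(0, len(b), 2):
        b[i + 1] ^= f(b[i], round_keys[i // 2])
    return b[1:] + b[:1]                                  # the block permutation

state = [0x01234567, 0x89ABCDEF, 0xDEADBEEF, 0xCAFEBABE]  # 2k = 4 branches of 32 bits
keys = [0x0F0F0F0F, 0x33333333]
for _ in range(4):                                        # a few toy rounds
    state = gfn_round(state, keys)
print([hex(x) for x in state])
```

The choice of the block permutation is precisely what the matrix representation mentioned above analyses: permutations better than the simple rotation reach full diffusion in fewer rounds.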
George Krait (Gamble): The topology of singular curves and surfaces
Computing the topology of singular curves and surfaces is a necessary function in many branches of science, including the design of robotic mechanisms and scientific visualization. In this talk, we restrict ourselves to generic types of singular varieties in order to combine the efficiency of the numerical methods in a certified algorithm that computes their topology. The main challenge is to find a square regular system that describes the singular locus. For this goal, we convey the problem to a higher-dimensional space.
Florian Lietard: Avoidability of additive k-powers.
In combinatorics on words, avoidability problems have been studied since the beginning of the last century, starting with the work of Axel Thue. In general, the question is whether one can build arbitrarily long words over a finite alphabet that avoid certain patterns. In 2014, J. Cassaigne, J. Currie, L. Schaeffer and J. Shallit showed that it is possible to build, over an alphabet of four digits, an infinite word that never contains three consecutive blocks of the same length and the same sum. The goal here is to show that this proof can be generalized. Working with word morphisms, the results obtained lead to criteria and to an algorithm that decides whether or not the generated fixed points avoid additive cubes.
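To make the avoided pattern concrete: an additive cube is three consecutive blocks of the same length and the same sum. A brute-force checker over integer sequences can be sketched as follows (an illustration only, unrelated to the morphism-based construction of the talk):

```python
def has_additive_cube(word):
    """Return True if the integer sequence contains three consecutive blocks
    of the same length and the same sum."""
    n = len(word)
    prefix = [0]
    for x in word:
        prefix.append(prefix[-1] + x)          # prefix sums give O(1) block sums
    for length in range(1, n // 3 + 1):
        for i in range(n - 3 * length + 1):
            s1 = prefix[i + length] - prefix[i]
            s2 = prefix[i + 2 * length] - prefix[i + length]
            s3 = prefix[i + 3 * length] - prefix[i + 2 * length]
            if s1 == s2 == s3:
                return True
    return False

print(has_additive_cube([0, 1, 0, 1, 0, 1]))   # True: (0 1)(0 1)(0 1)
print(has_additive_cube([0, 1, 4, 0, 3]))      # False for this short word
```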
Simon Masson (Caramba): Pairings in elliptic curves: how to share a secret with many people.
Communicating parties need to share a common secret in order to use efficient symmetric algorithms such as AES. We present the well-known Diffie-Hellman scheme, which allows two people to share a common key, and its limits when we want to extend it to three people. Finally, we explain how to fix this issue with pairings over elliptic curves.
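As an illustration of the three-party case (Joux's classical one-round tripartite key exchange, standard material rather than the content of the talk): with a symmetric bilinear pairing e and secret scalars a, b, c, the three parties publish aP, bP, cP and each one can compute the same shared key

```latex
K \;=\; e(bP,\, cP)^{a} \;=\; e(aP,\, cP)^{b} \;=\; e(aP,\, bP)^{c} \;=\; e(P, P)^{abc}.
```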
SÉMINAIRE DU DÉPARTEMENT
proposé par ALICE.
11 mai 2018, 13h15
Salle Jean Legras (A008)
Brian Wyvill,
Université de Victoria (Canada)
Implicit Modelling for Animation
Modelling with triangle meshes has been well supported in both hardware and software for computer animation, but problems arise
with meshes when defining closed volumes and arbitrary topologies. Representing volumes as procedural implicit models makes for
simple collision tests and methods for deformations. An example in modelling is presented for constructing smooth isolated or interconnected 3-D inscribed
volumes. This approach can be used to build porous structures, including volcanic rocks, liquid or dry foam, and tiny creatures known as radiolarians. In animation, a series of research projects between French and Canadian researchers has led to an implicit approach to high-quality skinning for character animation, enhancing existing mesh approaches.
This talk gives a high level view of these projects and shows that implicit modelling is a practical methodology for use in the animation and games industry.
Bio:
Brian Wyvill is a professor at the University of Victoria in Canada. He has been conducting research in computer graphics since the early 1970s
and spent several decades working on implicit modelling, and on the intersection between computing and art. Brian worked on the original Alien movie
and has presented several animated shorts at the SIGGRAPH electronic theatre. He is currently on the board of Eurographics and is vice-president of ACM SIGGRAPH.
3 avril 2018, 10h00
Salle Bob
Olivier Devillers,
LORIA
Compact encoding of triangulations
While classical encodings (winged edge) use 19n or 13n references to store a genus-0 triangulation of n points, I will present a structure that allows
- with 4n references, navigating from one triangle to another in constant time;
- with 5n references, in addition, accessing the vertices in constant time;
together with several variants that reconcile this compact structure with a 4n-bit compression format, or that store information in the faces of the triangulation or in the corners of the triangles.
This is joint work with Luca Castelli Aleardi (LIX).
(The paper)
ABC Seminar
16 mars 2018, 11h00
Salle Flajolet
Jean-Baptiste Courbot,
INRIA, Paris
Detection, segmentation and tracking of objects in images
This talk will cover three contributions in image processing, corresponding to my PhD and postdoctoral work. The first problem is the detection of very faint objects in astronomical hyperspectral images, in which each pixel contains resolved information over a portion of the light spectrum. In this setting, I will present Generalized Likelihood Ratio tests that make it possible to specify the spatial, spectral and similarity information of the objects being sought. I will then present a formulation of the detection problem as a particular case of image segmentation with pairwise Markov fields. This framework allows an unsupervised Bayesian segmentation that is robust to extremely strong noise in the images. In a third part, I will discuss my current work, which addresses the analysis of cloud dynamics in image sequences. This problem is tackled through multi-object particle filtering on the one hand, and through the formulation of sparse optimization problems on the other.
14 mars 2018, 14h00
Salle Bob
Odyssée Merveille,
ICube, Strasbourg
RORPO: a morphological method for the analysis of curvilinear structures, with applications to the filtering and segmentation of blood vessels.
The analysis of 3D curvilinear structures is a difficult problem in image analysis. Indeed, these structures are thin, easily corrupted by noise, and exhibit complex geometry. Over the past several years, many methods specifically dedicated to processing images containing curvilinear structures have been proposed, for various applications in materials science, remote sensing and medical imaging. Despite this, the analysis of curvilinear structures remains a complex task.
In this presentation, we will discuss the characterization of curvilinear structures for image analysis. We will first present a new method called RORPO, from which two features can be computed. The first is an intensity feature, which preserves the intensity of curvilinear structures while reducing that of other structures. The second is a direction feature, which provides, at each point of an image, the direction of a potential curvilinear structure. Unlike most methods in the literature, RORPO is a non-local, non-linear method, better suited to the intrinsic anisotropy of curvilinear structures. It relies on a recent notion from mathematical morphology: path operators.
RORPO can be used directly to filter images containing curvilinear structures, in order to specifically preserve them while also reducing noise. But the two RORPO features can also be used as prior information about curvilinear structures and integrated into a more complex image analysis method. In a second step, we will thus present a regularization term for variational segmentation that uses both RORPO features. The information provided by these two features makes it possible to regularize curvilinear structures only along their main axis. In this way, these structures are better preserved, and some curvilinear structures disconnected by noise can be reconnected.
Finally, 2D and 3D results of these methods will be presented on blood vessel images from various modalities.
COLLOQUIUM DU LORIA, proposé par le département
16 janvier 2018, 13:30,
Amphi Gilles Kahn
Grégoire Allaire, CMAP, École Polytechnique
Taking into account additive manufacturing constraints in topology optimization of structures
la page du colloquium
2017
8 novembre 2017, 11h00
Salle Jacques-Louis Lions
Vincent Despré,
Gamble
Computing the Geometric Intersection Number of Curves.
The geometric intersection number of a curve C on a surface is the
minimal number of self-intersections of any homotopic curve, i.e. of
any curve obtained from C by continuous deformation. We give a
quadratic algorithm to tackle this problem. The algorithm is purely
combinatorial but the proof involves some basic hyperbolic
geometry. The whole point is to describe a discrete structure that behaves similarly to the continuous hyperbolic case. I will first explain what a hyperbolic surface is and why this geometric structure is useful, and then show how we discretized it.
(joint work with Francis Lazarus, GIPSA-Lab)
2 octobre 2017, 9h30
Salle C103
Fabien Pierre,
Magrit
Video colorization: from the state of the art to industrial applications.
Image colorization is an extremely ill-posed problem, yet one of great interest to the entertainment industry. This double point of view makes it a very attractive topic. In this talk, I will present the state of the art and the methods developed by the speaker during his PhD. They rely on non-local and variational approaches. The functionals involved are non-smooth and non-convex, and required original minimization techniques. This made it possible to implement experimental software that combines user interaction with an exemplar-based approach, yielding an efficient, flexible and fast method. An extension to video is proposed, whose GPU implementation makes the variational approach interactive for the user. Nevertheless, it is not yet operational in the eyes of colorization professionals. In order to meet their needs, a few research directions will be proposed.
Workshop organized by GAMBLE.
September 25-26 2017,
Room Philippe Flajolet,
10 talks,
Workshop of the associated team "Astonishing"
More info
21 septembre 2017, 10h30
Salle Jacques-Louis Lions
Laurent Imbert,
LIRMM
Randomized Mixed-Radix Scalar Multiplication
Since 1996 and Kocher's first work on the subject, developers of cryptographic algorithms have known that they absolutely must protect their implementations against side-channel attacks. These attacks target both hardware and software implementations and apply to symmetric as well as asymmetric algorithms. After a quick overview of these attacks, I will present a new elliptic-curve scalar multiplication algorithm designed to resist most observation attacks, at a lower cost than classical countermeasures (or almost).
6 juin 2017, 14h
Salle C103
Thomas Waite,
Harvard Biorobotics Lab
Cosserat Rods for Modeling Robotic Catheter Systems
Tendon-driven robotic catheters are capable of precise execution of minimally invasive cardiac procedures, including ablations and imaging. However, these procedures require accurate modeling not only of the catheter and tendons but also of their interactions with the surrounding tissue and vasculature. For this reason, mechanically derived models provide a clear advantage over geometric models, given the ease with which mechanical models handle contact forces and friction. As a solution, we present a fully mechanical model of a tendon-driven robotic catheter system based on Cosserat rods and integrated with a stable, implicit scheme. We first validate the physical accuracy of the Cosserat rod as a model for a simple catheter centerline against an analytical model for large deformations and experimental data. We then expand the system by adding a second Cosserat rod to model a single tendon in addition to the catheter, and define the constraints of the tendon-catheter system with penalty forces. We then validate the resulting tendon-catheter system against experimental data to prove its physical accuracy. This model represents a new contribution to the field of robotic catheter modeling, in which both the tendons and the catheter are modeled by fully mechanical Cosserat rods and fully validated against experimental data.
Day of Department / Journée des doctorants du département
31 mai 2017, 9h-14h30
Salle Philippe Flajolet
- 9h30 Antoine Fond, Magrit : Joint facade segmentation and registration using Expectation-Maximisation
- 9h50 Charles Dumenil, Gamble : Delaunay triangulation of points evenly distributed on a surface
- 10h10 Svyatoslav Covanov, Caramba : Improved method to find optimal formulae for bilinear maps
- 10h30 Pause
- 11h Simon Abelard, Caramba : Complexity bounds for point-counting on hyperelliptic curves
- 11h20 Vincent Gaudillière, Magrit : Visual Place Recognition in Industrial Environment
- 11h40 Iordan Iordanov, Gamble : Delaunay triangulations of the Bolza surface
- 13h30 Maxence Reberol, Alice : Evaluating the accuracy of hex-dominant meshes for the finite element method
- 13h50 Florian Lietard : Avoidability of additive cubes in combinatorics on words
- 14h30 Pause
- 15h Khadija Musayeva, ABC : Metric Entropy and Rademacher Complexity of Margin Multi-category Classifiers
- 15h20 Raffaella Trivisonne, Magrit : Numerical Simulation using Stochastic Filters for Computer-Assisted Minimally Invasive Endovascular Procedures
Simon Abelard (Caramba) : Complexity bounds for point-counting on hyperelliptic curves
Counting points on (Jacobians of) curves over finite fields and computing their zeta functions are important routines for effective number theorists as well as cryptologists. We propose an algorithm à la Schoof for hyperelliptic curves and study its complexity in large characteristic and high genus. Our algorithm heavily relies on the computation of the r-torsion of the Jacobian for sufficiently many r, which we perform by solving polynomial systems built from Cantor's analogues to division polynomials for hyperelliptic curves. Taking advantage of the very particular structure of such systems, we improve on previous complexity bounds established by Adleman and Huang in 2001.
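As a generic reminder of the Schoof strategy (standard background, not a summary of the contributions above): for a genus-g curve over F_q, the characteristic polynomial \chi(T) of the Frobenius endomorphism has degree 2g, its coefficients are bounded via the Weil bounds (all roots have absolute value \sqrt{q}), and #J(F_q) = \chi(1). The action of Frobenius on the r-torsion J[r] \simeq (Z/rZ)^{2g} determines \chi mod r, so \chi itself is recovered by the Chinese Remainder Theorem from sufficiently many small primes r.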
Svyatoslav Covanov (Caramba) : Improved method to find optimal formulae for bilinear maps.
In 2012, Barbulescu, Detrey, Estibals and Zimmermann proposed a new framework to exhaustively search for optimal formulae for evaluating bilinear maps, such as Strassen or Karatsuba formulae.
The main contribution of this work is a new criterion to aggressively prune useless branches in the exhaustive search, thus leading to the computation of new optimal formulae, in particular for the short product modulo X^5 and the circulant product modulo (X^5 - 1). Moreover, we are able to prove that there is essentially only one optimal decomposition of the product of 3 × 2 by 2 × 3 matrices up to the action of some group of automorphisms.
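For readers unfamiliar with the setting, the Karatsuba formula is the textbook example of such an optimal bilinear formula: it evaluates the product of two linear polynomials with 3 multiplications instead of 4. A quick illustrative check in Python (not code from the talk):

    def karatsuba_deg1(a0, a1, b0, b1):
        """(a0 + a1*X) * (b0 + b1*X) = c0 + c1*X + c2*X^2,
        computed with 3 bilinear multiplications instead of 4."""
        p0 = a0 * b0
        p2 = a1 * b1
        p1 = (a0 + a1) * (b0 + b1)
        return p0, p1 - p0 - p2, p2   # (c0, c1, c2)

    assert karatsuba_deg1(3, 5, 7, 2) == (21, 41, 10)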
Seny Diatta (Gamble) : Projection of analytic surfaces
For some robotic problems we need to represent a singular surface that is the projection of a smooth surface embedded in higher dimension.
In this work, we focus on the problem of computing a triangulation of the projection on R^3 of an analytic surface embedded in R^4. 
Based on Transversality and Singularity Classification theories, we first show that, under generic assumptions, the set of singularities of the projected surface is generated by only three types of multi-germs: double points, triple points and cross-caps. Then, using this information, we design an algorithm taking as input an analytic surface and returning a triangulation isotopic to its projected surface.
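For orientation (a standard fact from singularity theory, not a claim about the proofs above): the local model of a cross-cap, also known as a Whitney umbrella, is the map germ (u, v) \mapsto (u, uv, v^2), while double and triple points are transverse intersections of two, respectively three, smooth sheets.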
Charles Dumenil (Gamble) : Delaunay triangulation of points evenly distributed on a surface
Reconstructing a surface from a point set is one of the main uses of 3D Delaunay triangulation in
real applications. In such applications, the points are, by definition, not distributed in the whole
3D space but on a surface, so understanding the expected complexity of the Delaunay triangulation
of a random sample of a surface is an important problem with practical implications.
Several results exist on the size of the Delaunay triangulation when the point set is a sample of a surface. These results depend, of course, on how the surface and the sampling are defined. The surface can be smooth or only piecewise smooth; it may or may not be generic, i.e. contain no pieces of circles (no spheres, cones, cylinders, ...). The sampling can be a good sampling (no holes and no concentration of points) or a random one. Currently known results are not tight, and the very natural case of a random sampling of a surface is not well covered.
Antoine Fond (Magrit) : Joint facade segmentation and registration using Expectation-Maximisation
Due to strong perspective effects and repetitions, pose computation in urban environments is a challenging problem where classic approaches fail. However, the regularity in geometry and appearance of such scenes can be exploited. We first showed that vanishing points can be efficiently found to overcome viewpoint changes. Then we showed that, in the rectified image, we could roughly detect and identify a building. These two steps can be seen as the initialization of a pose estimation problem. We propose here to refine this pose by jointly solving semantic segmentation and registration within an Expectation-Maximisation framework.
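As a generic illustration of the E/M alternation only, on a toy one-dimensional Gaussian mixture rather than on the facade segmentation and registration model above, an Expectation-Maximisation loop alternates soft assignments (E-step) with parameter re-estimation (M-step):

    import math, random

    def em_gmm_1d(xs, iters=50):
        """Generic EM on a toy two-component 1D Gaussian mixture.
        E-step: soft assignments; M-step: parameter re-estimation."""
        mu = [min(xs), max(xs)]          # crude initialization
        var = [1.0, 1.0]
        weights = [0.5, 0.5]
        for _ in range(iters):
            # E-step: responsibilities resp[i][k] = P(component k | xs[i])
            resp = []
            for x in xs:
                dens = [weights[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                        / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
                s = sum(dens)
                resp.append([d / s for d in dens])
            # M-step: re-estimate means, variances and mixing weights
            for k in range(2):
                nk = sum(r[k] for r in resp)
                mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
                var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk + 1e-9
                weights[k] = nk / len(xs)
        return mu, var, weights

    data = [random.gauss(0.0, 1.0) for _ in range(200)] + [random.gauss(5.0, 1.0) for _ in range(200)]
    print(em_gmm_1d(data))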
Vincent Gaudilliere (Magrit) : Visual Place Recognition in Industrial Environment
Visual place recognition is a "well-defined but extremely challenging problem to solve", according to a survey published in 2016 in IEEE Transactions on Robotics (Lowry et al.). Furthermore, performing visual place recognition in industrial environments surely leads to additional challenges. In this presentation, we will underline these issues, and then point out some potential solutions, illustrated by preliminary results.
Iordan Iordanov (Gamble): Delaunay triangulations of the Bolza surface
Delaunay triangulations and their dual Voronoi diagrams are among the most important structures in
Computational Geometry. They are well-studied and many algorithms to efficiently compute them exist. 
However, our knowledge and tools are mainly confined to the Euclidean d-dimensional space.
One extension that has been addressed in previous work is the computation of Delaunay triangulations 
on closed flat manifolds, which can be seen as compact quotient spaces of the Euclidean space by a 
discrete group of isometries. An implementation for the special case of the 3D flat torus exists in CGAL.
In this presentation, I will talk about our current work on the Bolza surface, which is a hyperbolic 
surface homeomorphic to the double torus, and can be seen as a compact quotient space of the hyperbolic 
plane by a discrete group of hyperbolic isometries. I will discuss such triangulations from both mathematical 
and practical viewpoints, and I will present our implementation of an algorithm to construct them. I will
also discuss ideas for our future work on extending our algorithm to more general surfaces.
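As background (standard facts about this surface, independent of the implementation presented): the Bolza surface can be written as the quotient H^2 / \Gamma, where \Gamma \subset PSL(2,R) is generated by the four hyperbolic translations pairing opposite sides of a regular hyperbolic octagon whose angles all equal \pi/4, so that its eight vertices are identified to a single point of total angle 2\pi.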
Florian Lietard : Avoidability of additive cubes in combinatorics on words
In 2011, a paper by J. Cassaigne, J. D. Currie, L. Schaeffer and J. Shallit showed that it is possible, over an alphabet of 4 digits, to construct an infinite word that avoids additive cubes; in other words, this word contains no three consecutive blocks of the same length and the same digit sum. Beyond this result, studying the structure of the proof makes it possible to extend the work of Cassaigne et al. and to formulate several conjectures about words avoiding additive cubes.
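To make the definition concrete, here is a small illustrative checker (not from the talk) that tests whether a finite word of digits contains an additive cube, i.e. three consecutive blocks of equal length and equal sum:

    def has_additive_cube(word):
        """Return True if `word` (a sequence of integers) contains three
        consecutive blocks of equal length and equal sum."""
        n = len(word)
        prefix = [0]
        for x in word:
            prefix.append(prefix[-1] + x)
        for L in range(1, n // 3 + 1):            # block length
            for i in range(n - 3 * L + 1):        # start of the first block
                s1 = prefix[i + L] - prefix[i]
                s2 = prefix[i + 2 * L] - prefix[i + L]
                s3 = prefix[i + 3 * L] - prefix[i + 2 * L]
                if s1 == s2 == s3:
                    return True
        return False

    assert has_additive_cube([1, 2, 0, 3, 2, 1])   # blocks 1+2, 0+3, 2+1
    assert not has_additive_cube([0, 1, 0, 2])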
Khadija Musayeva (ABC) : Metric Entropy and Rademacher Complexity of Margin Multi-category Classifiers
This presentation deals with the generalization performance of margin multi-category classifiers. To derive an upper bound on the probability of error, we adopt a standard approach involving the Rademacher complexity as the capacity measure. Then, this complexity is related to a metric entropy by application of the chaining method. Finally, the metric entropy is related to fat-shattering dimensions by means of a generalized Sauer-Shelah lemma. In this context, we investigate the optimal choice of the generalized Sauer-Shelah lemma with respect to two major criteria: the dependencies of the resulting bound on the sample size and on the number of categories.
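For context, a standard bound of this flavour (the generic form, not the refined multi-class bound discussed in the talk) states that, for a class F of functions with values in [0,1] and an i.i.d. sample of size n, with probability at least 1 - \delta every f in F satisfies

    L(f) \le \hat{L}_n(f) + 2 R_n(F) + \sqrt{\ln(1/\delta) / (2n)},

where R_n(F) is the Rademacher complexity of F; chaining then bounds R_n(F) by a metric entropy integral, and a generalized Sauer-Shelah lemma bounds the metric entropy in terms of fat-shattering dimensions.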
Maxence Reberol (Alice): Evaluating the accuracy of hex-dominant meshes for the finite element method
The finite element method approximates the solutions of partial differential equations with polynomials defined piecewise on the cells of a mesh. This talk presents a method to compare the approximations obtained with different 3D meshes: tetrahedral, hexahedral and hex-dominant meshes. The main idea is to compute the L^p distance between scalar (or vector) fields using a large and regular sampling, which is computed efficiently thanks to graphics hardware. We apply this method to large models and discuss the performance and accuracy of the approach.
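As a hedged, toy-scale illustration of the sampling idea only (the implementation described above runs on graphics hardware over large finite element models; the fields, box and grid size below are made up for the example), the L^p distance between two scalar fields can be approximated on a regular grid:

    import math

    def lp_distance(f, g, bbox, n=64, p=2):
        """Approximate the L^p distance between scalar fields f and g over an
        axis-aligned box using a regular n x n x n midpoint-rule sampling."""
        (x0, y0, z0), (x1, y1, z1) = bbox
        hx, hy, hz = (x1 - x0) / n, (y1 - y0) / n, (z1 - z0) / n
        total = 0.0
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    x = x0 + (i + 0.5) * hx
                    y = y0 + (j + 0.5) * hy
                    z = z0 + (k + 0.5) * hz
                    total += abs(f(x, y, z) - g(x, y, z)) ** p
        return (total * hx * hy * hz) ** (1.0 / p)

    # Example: a field and a coarse approximation of it (sin(x) ~ x near 0)
    f = lambda x, y, z: math.sin(x) * y + z
    g = lambda x, y, z: x * y + z
    print(lp_distance(f, g, ((0, 0, 0), (1, 1, 1))))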
2016
Day of Department / Journée des doctorants du département
31 mai 2016, 9h-14h30, Programme