# Deep Hough Voting


![[Pasted image 20231211105340.png]]

# End-to-end

1. Feed the point cloud into PointNet++ to generate our seeds
    1. PointNet++ subsamples by applying farthest-point sampling, reducing the number of points from $N$ to $M$
    2. For each point it outputs the $x, y, z$ coordinates along with $C$ features in a $1 \times C$ feature vector
        1. Input: $N \times 3$
        2. Output: $M \times (3 + C)$
2. Take the seeds and generate votes
    1. Each seed's (i.e. $\{s_i\}_{i=1}^{M}$, $s_i = [x_i, f_i]$) features $f_i$ get passed into an MLP that outputs a spatial offset $\Delta x_i \in \mathbb{R}^3$ and a feature offset $\Delta f_i \in \mathbb{R}^{C}$; these are our "votes"
    2. The votes push the seed points in the direction of the center of their object
    3. A loss is applied here on the spatial offset (an $L_1$ loss between the true $\Delta x_i$ and the predicted one)
3. Vote clustering
    1. Subsample again by taking $K$ votes from the $M$ available using farthest-point sampling; use these as cluster centroids and assign each of the $M$ votes to its nearest centroid
    2. Pass the $K$ clusters through a shared PointNet to propose the bounding boxes along with a classification of the object class
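The end-to-end flow above can be sketched with toy NumPy shapes. This is only a shape-bookkeeping sketch under simplifying assumptions: the "MLP" is a hypothetical untrained linear map, and the seed features are zero placeholders for what PointNet++ would actually produce — the point is the $N \to M \to K$ subsampling and the $M \times (3 + C)$ offset layout, not a real VoteNet implementation.

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedy FPS: repeatedly pick the point farthest from everything chosen so far."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(points.shape[0]))]
    dists = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))           # farthest remaining point
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(points - points[nxt], axis=1))
    return points[chosen]

# Toy sizes (hypothetical, chosen just for this sketch)
N, M, K, C = 1000, 64, 8, 4
cloud = np.random.default_rng(1).random((N, 3))   # input point cloud: N x 3

# Step 1: subsample N -> M seeds; features are a zero stand-in for PointNet++ output
seed_xyz = farthest_point_sampling(cloud, M)      # M x 3
seed_feat = np.zeros((M, C))                      # M x C, so each seed is (3 + C)-dim

# Step 2: "voting MLP" as an untrained linear map from features to (dx, df)
W = np.zeros((C, 3 + C))                          # untrained weights, shapes only
offsets = seed_feat @ W                           # M x (3 + C)
votes_xyz = seed_xyz + offsets[:, :3]             # M x 3 vote positions
votes_feat = seed_feat + offsets[:, 3:]           # M x C vote features

# Step 3: pick K vote centroids by FPS, assign every vote to its nearest centroid
centroids = farthest_point_sampling(votes_xyz, K)                                   # K x 3
assign = np.argmin(np.linalg.norm(votes_xyz[:, None] - centroids[None], axis=2), axis=1)
```

With zero weights the votes coincide with the seeds; in the real network the learned offsets move them toward object centers before clustering.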