| Class and Description |
|---|
| org.opencv.features2d.DescriptorExtractor |
| org.opencv.features2d.FeatureDetector |
| org.bytedeco.javacpp.opencv_core.CvMat<br>CvMat is now obsolete; consider using Mat instead. |
| org.bytedeco.javacpp.opencv_core.CvMatND<br>Consider using cv::Mat instead. |
| Method and Description |
|---|
| org.bytedeco.javacpp.opencv_dnn.NormalizeBBoxLayer.acrossSpatial() |
| org.opencv.features2d.FeatureDetector.create(int) |
| org.bytedeco.javacpp.opencv_videoio.cvCaptureFromAVI(BytePointer)<br>Use cvCreateFileCapture() instead. |
| org.bytedeco.javacpp.opencv_videoio.cvCaptureFromCAM(int)<br>Use cvCreateCameraCapture() instead. |
| org.bytedeco.javacpp.opencv_videoio.cvCaptureFromFile(BytePointer)<br>Use cvCreateFileCapture() instead. |
| org.bytedeco.javacpp.opencv_videoio.cvCreateAVIWriter(BytePointer, int, double, opencv_core.CvSize, int)<br>Use cvCreateVideoWriter() instead. |
| org.bytedeco.javacpp.opencv_videoio.cvWriteToAVI(opencv_videoio.CvVideoWriter, opencv_core.IplImage)<br>Use cvWriteFrame() instead. |
| org.bytedeco.javacpp.helper.opencv_core.AbstractCvMat.get() |
| org.bytedeco.javacpp.helper.opencv_core.AbstractCvMat.get(double[]) |
| org.bytedeco.javacpp.helper.opencv_core.AbstractCvMat.get(int) |
| org.bytedeco.javacpp.helper.opencv_core.AbstractCvMat.get(int, double[]) |
| org.bytedeco.javacpp.helper.opencv_core.AbstractCvMat.get(int, double[], int, int) |
| org.bytedeco.javacpp.helper.opencv_core.AbstractCvMat.get(int, int) |
| org.bytedeco.javacpp.helper.opencv_core.AbstractCvMat.get(int, int, int) |
| org.bytedeco.javacpp.helper.opencv_core.AbstractArray.getByteBuffer() |
| org.bytedeco.javacpp.helper.opencv_core.AbstractCvMat.getByteBuffer() |
| org.bytedeco.javacpp.helper.opencv_core.AbstractArray.getByteBuffer(int) |
| org.bytedeco.javacpp.helper.opencv_core.AbstractArray.getDoubleBuffer() |
| org.bytedeco.javacpp.helper.opencv_core.AbstractCvMat.getDoubleBuffer() |
| org.bytedeco.javacpp.helper.opencv_core.AbstractArray.getDoubleBuffer(int) |
| org.bytedeco.javacpp.helper.opencv_core.AbstractArray.getFloatBuffer() |
| org.bytedeco.javacpp.helper.opencv_core.AbstractCvMat.getFloatBuffer() |
| org.bytedeco.javacpp.helper.opencv_core.AbstractArray.getFloatBuffer(int) |
| org.bytedeco.javacpp.helper.opencv_core.AbstractArray.getIntBuffer() |
| org.bytedeco.javacpp.helper.opencv_core.AbstractCvMat.getIntBuffer() |
| org.bytedeco.javacpp.helper.opencv_core.AbstractArray.getIntBuffer(int) |
| org.bytedeco.javacpp.opencv_core.Program.getPrefix() |
| org.bytedeco.javacpp.opencv_core.Program.getPrefix(BytePointer) |
| org.bytedeco.javacpp.opencv_core.Program.getPrefix(String) |
| org.bytedeco.javacpp.helper.opencv_core.AbstractArray.getShortBuffer() |
| org.bytedeco.javacpp.helper.opencv_core.AbstractCvMat.getShortBuffer() |
| org.bytedeco.javacpp.helper.opencv_core.AbstractArray.getShortBuffer(int) |
| org.bytedeco.javacpp.opencv_core.getThreadNum()<br>The current implementation does not correspond to this documentation; the exact meaning of the return value depends on the threading framework used by the OpenCV library. |
| org.opencv.core.Core.getThreadNum() |
| org.opencv.imgproc.Imgproc.linearPolar(Mat, Mat, Point, double, int) |
| org.bytedeco.javacpp.opencv_imgproc.linearPolar(opencv_core.Mat, opencv_core.Mat, opencv_core.Point2f, double, int)<br>This function produces the same result as cv::warpPolar(src, dst, src.size(), center, maxRadius, flags). It transforms the source image using the following transformation (see \ref polar_remaps_reference_image "Polar remaps reference image c)"): \f[\begin{array}{l} dst( \rho , \phi ) = src(x,y) \\ dst.size() \leftarrow src.size() \end{array}\f] where \f[\begin{array}{l} I = (dx,dy) = (x - center.x,y - center.y) \\ \rho = Kmag \cdot \texttt{magnitude} (I) ,\\ \phi = Kangle \cdot \texttt{angle} (I) \end{array}\f] and \f[\begin{array}{l} Kmag = src.cols / maxRadius \\ Kangle = src.rows / 2\Pi \end{array}\f] |
| org.bytedeco.javacpp.opencv_text.loadOCRHMMClassifierCNN(BytePointer)<br>Use loadOCRHMMClassifier instead. |
| org.bytedeco.javacpp.opencv_text.loadOCRHMMClassifierNM(BytePointer)<br>Use loadOCRHMMClassifier instead. |
| org.opencv.imgproc.Imgproc.logPolar(Mat, Mat, Point, double, int) |
| org.bytedeco.javacpp.opencv_imgproc.logPolar(opencv_core.Mat, opencv_core.Mat, opencv_core.Point2f, double, int)<br>This function produces the same result as cv::warpPolar(src, dst, src.size(), center, maxRadius, flags + WARP_POLAR_LOG). It transforms the source image using the following transformation (see \ref polar_remaps_reference_image "Polar remaps reference image d)"): \f[\begin{array}{l} dst( \rho , \phi ) = src(x,y) \\ dst.size() \leftarrow src.size() \end{array}\f] where \f[\begin{array}{l} I = (dx,dy) = (x - center.x,y - center.y) \\ \rho = M \cdot log_e(\texttt{magnitude} (I)) ,\\ \phi = Kangle \cdot \texttt{angle} (I) \end{array}\f] and \f[\begin{array}{l} M = src.cols / log_e(maxRadius) \\ Kangle = src.rows / 2\Pi \end{array}\f] The function emulates human "foveal" vision and can be used for fast scale- and rotation-invariant template matching, for object tracking, and so forth. |
| org.bytedeco.javacpp.helper.opencv_core.AbstractCvMat.put(double...) |
| org.bytedeco.javacpp.helper.opencv_core.AbstractCvMat.put(int, double...) |
| org.bytedeco.javacpp.helper.opencv_core.AbstractCvMat.put(int, double) |
| org.bytedeco.javacpp.helper.opencv_core.AbstractCvMat.put(int, double[], int, int) |
| org.bytedeco.javacpp.helper.opencv_core.AbstractCvMat.put(int, int, double) |
| org.bytedeco.javacpp.helper.opencv_core.AbstractCvMat.put(int, int, int, double) |
| org.bytedeco.javacpp.opencv_core.Program.read(BytePointer, BytePointer) |
| org.bytedeco.javacpp.opencv_core.Program.read(String, String) |
| org.bytedeco.javacpp.helper.opencv_core.AbstractCvMat.reset() |
| org.bytedeco.javacpp.opencv_dnn.LSTMLayer.setProduceCellOutput() |
| org.bytedeco.javacpp.opencv_dnn.LSTMLayer.setProduceCellOutput(boolean)<br>Use the flag produce_cell_output in LayerParams instead. If this flag is set to true, the layer will produce \f$ c_t \f$ as its second output; the shape of the second output is the same as that of the first. |
| org.bytedeco.javacpp.opencv_dnn.LSTMLayer.setUseTimstampsDim() |
| org.bytedeco.javacpp.opencv_dnn.LSTMLayer.setUseTimstampsDim(boolean)<br>Use the flag use_timestamp_dim in LayerParams instead. Specifies whether the first dimension of the input blob is interpreted as the timestamp dimension or as the sample dimension. If the flag is set to true, the shape of the input blob is interpreted as [T, N, [data dims]], where T is the number of timestamps and N is the number of independent streams; in this case each forward() call iterates through T timestamps and updates the layer's state T times. If the flag is set to false, the shape of the input blob is interpreted as [N, [data dims]]; in this case each forward() call makes one iteration and produces one timestamp with shape [N, [out dims]]. |
| org.bytedeco.javacpp.opencv_dnn.LSTMLayer.setWeights(opencv_core.Mat, opencv_core.Mat, opencv_core.Mat)<br>Use LayerParams::blobs instead. Sets trained weights for the LSTM layer. LSTM behavior on each step is defined by the current input, previous output, previous cell state, and learned weights. Let \f$x_t\f$ be the current input, \f$h_t\f$ the current output, and \f$c_t\f$ the current cell state. Then the current output and current cell state are computed as follows: \f{eqnarray*}{ h_t &= o_t \odot tanh(c_t), \\ c_t &= f_t \odot c_{t-1} + i_t \odot g_t, \\ \f} where \f$\odot\f$ is the per-element multiplication operation and \f$i_t, f_t, o_t, g_t\f$ are internal gates computed from the learned weights. Gates are computed as follows: \f{eqnarray*}{ i_t &= sigmoid&(W_{xi} x_t + W_{hi} h_{t-1} + b_i), \\ f_t &= sigmoid&(W_{xf} x_t + W_{hf} h_{t-1} + b_f), \\ o_t &= sigmoid&(W_{xo} x_t + W_{ho} h_{t-1} + b_o), \\ g_t &= tanh &(W_{xg} x_t + W_{hg} h_{t-1} + b_g), \\ \f} where \f$W_{x?}\f$, \f$W_{h?}\f$, and \f$b_{?}\f$ are learned weights represented as matrices: \f$W_{x?} \in R^{N_h \times N_x}\f$, \f$W_{h?} \in R^{N_h \times N_h}\f$, \f$b_? \in R^{N_h}\f$. For simplicity and performance, \f$ W_x = [W_{xi}; W_{xf}; W_{xo}; W_{xg}] \f$ is used (i.e. \f$W_x\f$ is the vertical concatenation of the \f$ W_{x?} \f$), \f$ W_x \in R^{4N_h \times N_x} \f$. The same holds for \f$ W_h = [W_{hi}; W_{hf}; W_{ho}; W_{hg}], W_h \in R^{4N_h \times N_h} \f$ and for \f$ b = [b_i; b_f; b_o; b_g]\f$, \f$b \in R^{4N_h} \f$. |
| org.bytedeco.javacpp.opencv_core.Program.source() |
| org.bytedeco.javacpp.opencv_core.Program.write(BytePointer) |
| org.bytedeco.javacpp.opencv_core.Program.write(String) |
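The log-polar coordinate transform quoted in the deprecated logPolar entry above can be sketched numerically. This is a plain-Java illustration of the documented formula only (ρ = M · ln|I|, φ = Kangle · angle(I)), not the OpenCV remap implementation; the image dimensions and center used here are made-up example values.

```java
// Sketch of the forward log-polar mapping from the logPolar documentation.
public class LogPolarMapSketch {
    // Maps a source pixel (x, y) to log-polar coordinates (rho, phi)
    // for a src/dst of size cols x rows, given the center and maxRadius.
    static double[] map(double x, double y, double cx, double cy,
                        int cols, int rows, double maxRadius) {
        double dx = x - cx, dy = y - cy;               // I = (dx, dy)
        double m = cols / Math.log(maxRadius);         // M = src.cols / log_e(maxRadius)
        double kAngle = rows / (2.0 * Math.PI);        // Kangle = src.rows / 2*Pi
        double rho = m * Math.log(Math.hypot(dx, dy)); // rho = M * log_e(magnitude(I))
        double phi = kAngle * Math.atan2(dy, dx);      // phi = Kangle * angle(I)
        return new double[] { rho, phi };
    }

    public static void main(String[] args) {
        // Doubling the distance from the center shifts rho by M*ln(2) while
        // phi stays fixed: the scale-to-translation property behind the
        // scale-invariant template matching the entry mentions.
        double[] a = map(110, 100, 100, 100, 256, 256, 80); // |I| = 10
        double[] b = map(120, 100, 100, 100, 256, 256, 80); // |I| = 20
        double m = 256 / Math.log(80);
        System.out.println((b[0] - a[0]) - m * Math.log(2)); // ~0
        System.out.println(b[1] - a[1]);                     // 0
    }
}
```

In a log-polar image, a uniform rescaling of the source thus becomes a pure shift along the ρ axis, which is why the transform helps with scale- and rotation-invariant matching.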
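The gate equations quoted in the deprecated LSTMLayer.setWeights entry above can be made concrete with a small numeric sketch. This is plain Java implementing only the documented math (with W_x, W_h, and b stored as the vertical gate-block concatenations the entry describes); it is not the opencv_dnn implementation, and the tiny sizes are illustrative only.

```java
// Sketch of one LSTM step per the setWeights documentation:
// h_t = o_t . tanh(c_t),  c_t = f_t . c_{t-1} + i_t . g_t.
public class LstmStepSketch {
    static double sigmoid(double v) { return 1.0 / (1.0 + Math.exp(-v)); }

    // Wx is (4*Nh) x Nx, Wh is (4*Nh) x Nh, b has length 4*Nh,
    // with rows ordered as the i, f, o, g gate blocks.
    // Returns { h_t, c_t } for hidden size Nh = hPrev.length.
    static double[][] step(double[][] Wx, double[][] Wh, double[] b,
                           double[] x, double[] hPrev, double[] cPrev) {
        int nh = hPrev.length;
        double[] h = new double[nh], c = new double[nh];
        for (int j = 0; j < nh; j++) {
            double i = b[j], f = b[nh + j], o = b[2 * nh + j], g = b[3 * nh + j];
            for (int k = 0; k < x.length; k++) {       // W_x? x_t terms
                i += Wx[j][k] * x[k];
                f += Wx[nh + j][k] * x[k];
                o += Wx[2 * nh + j][k] * x[k];
                g += Wx[3 * nh + j][k] * x[k];
            }
            for (int k = 0; k < nh; k++) {             // W_h? h_{t-1} terms
                i += Wh[j][k] * hPrev[k];
                f += Wh[nh + j][k] * hPrev[k];
                o += Wh[2 * nh + j][k] * hPrev[k];
                g += Wh[3 * nh + j][k] * hPrev[k];
            }
            i = sigmoid(i); f = sigmoid(f); o = sigmoid(o); g = Math.tanh(g);
            c[j] = f * cPrev[j] + i * g;               // c_t
            h[j] = o * Math.tanh(c[j]);                // h_t
        }
        return new double[][] { h, c };
    }

    public static void main(String[] args) {
        // Nh = Nx = 1, all-zero weights and biases: every sigmoid gate is 0.5
        // and g = tanh(0) = 0, so c_t = 0.5 * cPrev and h_t = 0.5 * tanh(c_t).
        double[][] out = step(new double[4][1], new double[4][1], new double[4],
                              new double[] {1.0}, new double[] {0.0}, new double[] {1.0});
        System.out.println(out[1][0]); // c_t = 0.5
        System.out.println(out[0][0]); // h_t = 0.5 * tanh(0.5)
    }
}
```

The second output of the sketch, c_t, is exactly the value the deprecated setProduceCellOutput flag controlled exposing from the layer.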
Copyright © 2018. All rights reserved.