> Sent: Monday, December 17, 2018 at 10:06 AM
> From: "Jason H" <jh...@gmx.com>
> To: "Samuel Rødal" <sro...@gmail.com>
> Cc: inter...@lists.qt-project.org
> Subject: Re: [Interest] Understanding QImage::transformed()
...
> 
> Thanks Samuel, I was confused by this part for transformed(): "The 
> transformation matrix is internally adjusted to compensate for unwanted 
> translation; i.e. the image produced is the smallest image that contains all 
> the transformed points of the original image. Use the trueMatrix() function 
> to retrieve the actual matrix used for transforming an image." Then 
> trueMatrix() says "This function returns the modified matrix, which maps 
> points correctly from the original image into the new image."
> In which I interpreted the combination of the two as saying "Use trueMatrix() 
> to retrieve the actual matrix for transforming an image without this 
> translation" It seems to me that if I'm doing quad-to-quad, I am 
> intentionally specifying where I want the pixels to end up, rather than 
> having to query where they ended up after. I believe this is how OpenCV
> works, with:
> matrix = cv2.getPerspectiveTransform(pts1, pts2)
> 
> Anyway, thanks for the insight! I'll give this a go.

So I've played with this a bit, and still no joy. Neither the code you provided 
nor my own attempts ever map those points correctly. Your code looks like you 
understand what I am trying to do, so at least I communicated that part 
effectively.

I have labeled the points as colors; the quadToQuad call uses these points for 
the toPoly mapping:
QMap<Qt::GlobalColor, QPoint> toPoints {
  {Qt::yellow, QPoint(540, 0)},
  {Qt::blue,   QPoint(1080, 540)},
  {Qt::red,    QPoint(540, 1080)},
  {Qt::green,  QPoint(0, 540)}
};

After image.transformed() the result is correctly oriented, but nothing else is 
correct. The actual points come out to be:

QMap<Qt::GlobalColor, QPoint> resultPoints { // estimated by inspecting the image in GIMP
  {Qt::yellow, QPoint(1620, 300)},
  {Qt::blue,   QPoint(2448, 1110)},
  {Qt::red,    QPoint(1638, 1917)},
  {Qt::green,  QPoint(825, 1104)}
};

In theory my output rect is then (825, 300)-(2448, 1917), giving (dx, dy) = 
(1623, 1617), where I expected (1080, 1080). It's about 1.50 times bigger in 
both dimensions than it should be. Even if I had made some hypotenuse mistake, 
that would top out at about 1.42 (sqrt(2)).

However, I can't get anything remotely close to those points, so I can't even 
do the math to identify the destination rect for the extract-and-scale-down 
step: (green.x(), yellow.y())-(blue.x(), red.y())

I kind of understand what you tried to do:
QTransform trueMatrix = QImage::trueMatrix(tx, image.width(), image.height());
// Not sure why the true matrix needs the image dimensions. From a linear-algebra
// perspective, a transform is a transform, but whatever...
QPoint delta = trueMatrix.map(tx.inverted().map(QPointF(0, 0))).toPoint();
// Mapping (0,0) through the inverse of the transform and back through the true
// matrix, but this doesn't produce the same coordinate mapping as on the image, so why?

I think the code should look like this (but this doesn't work either):
QImage out = image.transformed(tx, Qt::SmoothTransformation);

QMap<Qt::GlobalColor, QPoint> destinationPoints { // should map to resultPoints
  {Qt::yellow, tx.map(toPoints[Qt::yellow])}, // QPoint(640, 252), wrong
  {Qt::blue,   tx.map(toPoints[Qt::blue])},   // QPoint(769, 540), wrong
  {Qt::red,    tx.map(toPoints[Qt::red])},    // QPoint(478, 674), wrong
  {Qt::green,  tx.map(toPoints[Qt::green])}   // QPoint(349, 384), wrong
};
Or, using the mapping you suggested:
QMap<Qt::GlobalColor, QPoint> destinationPoints { // should map to resultPoints
  {Qt::yellow, trueMatrix.map(tx.inverted().map(toPoints[Qt::yellow]))}, // QPoint(2234, 1452), wrong
  {Qt::blue,   trueMatrix.map(tx.inverted().map(toPoints[Qt::blue]))},   // QPoint(1954, 1050), wrong
  {Qt::red,    trueMatrix.map(tx.inverted().map(toPoints[Qt::red]))},    // QPoint(2362, 764), wrong
  {Qt::green,  trueMatrix.map(tx.inverted().map(toPoints[Qt::green]))}   // QPoint(2639, 1172), wrong
};

I tried other variations, but never got anything remotely correct.

I'm attaching sample images.
The dots are at these locations in fromImage.jpg:
"red": [714.667, 13.3333],
"green": [992, 421.333],
"blue": [306.667, 298.667],
"yellow": [586.667, 701.333]

It produces a transform of: QTransform(type=TxProject, 11=-1.53896 12=0.27276 
13=-2.5836e-06 21=-0.299256 22=-1.5449 23=-2.44385e-05 31=1652.74 32=923.468 
33=1.01866)

They should wind up at the points in toPoints and resemble toImage.jpg (created 
manually in GIMP).

This "just works" in OpenCV, but I cannot get it to work in Qt.

_______________________________________________
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest