# How to determine which cells in a grid intersect with a given triangle?

I'm currently writing a 2D AI simulation, but I'm not completely certain how to check whether the position of an agent is within the field of view of another.

Currently, my world partitioning is simple cell-space partitioning (a grid). I want to use a triangle to represent the field of view, but how can I calculate the cells that intersect with the triangle?

The red areas are the cells I want to calculate, by checking whether the triangle intersects those cells.

EDIT:

Just to add to the confusion (or perhaps even make it easier). Each cell has a min and max vector where the min is the bottom left corner and the max is the top right corner.

Replies:

Calculate the three corners of your FOV triangle, rotate them to face the correct direction, and then do one of the following:

1) do a point-in-triangle test for all the potential targets

2) calculate the bounding box of this triangle, and do a point-in-triangle test for all potential targets in cells inside this bounding box - this will be very straightforward code to debug

A similar approach is to use a quadtree rather than a grid and do the intersections on that. If the O(1) tile access is speeding you up, then just testing every cell in the bounds of the FOV triangle for triangle membership ought to be so fast as to be instant. Since you're looking at other options, I presume it's not, and that the O(1) access is actually paying a massive cache-miss cost as you thrash your cache. You could of course look at prefetch instructions to annotate your bounding-box walk with...

3) 'rasterise' this triangle and check the cells that it 'paints' - likely the most efficient code, but perhaps only marginally, as I'd speculate it's all dominated by cache-miss times and depends on how complex your cells are and how occupied they are.
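The point-in-triangle test that options 1-3 rely on can be sketched with cross products. The names below (`TriangleGrid`, `CellsInTriangle`) and the idea of sampling each cell's centre are my own illustration, not the poster's code; it walks only the cells inside the triangle's bounding box, as option 2 suggests:

```csharp
using System;
using System.Collections.Generic;

static class TriangleGrid
{
    // Sign of the cross product (b - a) x (p - a): which side of edge ab the point p lies on.
    static float Side(float ax, float ay, float bx, float by, float px, float py)
        => (bx - ax) * (py - ay) - (by - ay) * (px - ax);

    // p is inside (or on the boundary of) triangle abc if it is not strictly
    // on different sides of different edges.
    public static bool PointInTriangle(float px, float py,
        float ax, float ay, float bx, float by, float cx, float cy)
    {
        float d1 = Side(ax, ay, bx, by, px, py);
        float d2 = Side(bx, by, cx, cy, px, py);
        float d3 = Side(cx, cy, ax, ay, px, py);
        bool hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
        bool hasPos = d1 > 0 || d2 > 0 || d3 > 0;
        return !(hasNeg && hasPos);
    }

    // Cells (col, row) whose centre lies inside the triangle, scanning only
    // the cells covered by the triangle's bounding box.
    public static List<(int col, int row)> CellsInTriangle(
        float cellSize,
        float ax, float ay, float bx, float by, float cx, float cy)
    {
        int minCol = (int)Math.Floor(Math.Min(ax, Math.Min(bx, cx)) / cellSize);
        int maxCol = (int)Math.Floor(Math.Max(ax, Math.Max(bx, cx)) / cellSize);
        int minRow = (int)Math.Floor(Math.Min(ay, Math.Min(by, cy)) / cellSize);
        int maxRow = (int)Math.Floor(Math.Max(ay, Math.Max(by, cy)) / cellSize);

        var result = new List<(int col, int row)>();
        for (int row = minRow; row <= maxRow; row++)
            for (int col = minCol; col <= maxCol; col++)
            {
                float centreX = (col + 0.5f) * cellSize;
                float centreY = (row + 0.5f) * cellSize;
                if (PointInTriangle(centreX, centreY, ax, ay, bx, by, cx, cy))
                    result.Add((col, row));
            }
        return result;
    }
}
```

Note the centre test misses cells the triangle only clips at a corner; a conservative variant would test each cell's rectangle against the triangle instead of just its centre.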

An alternative approach is to render the FOV to an off-screen bitmap and then read the pixel value for each of your objects; you can't 'unmix paint' but with a limited number of objects and by carefully choosing your paint you can deduce who saw who. This approach is similar to how many games work out what the user clicked on - they draw the scene off-screen using solid colours for the hit-areas. GPUs are very fast at triangle fill...
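A CPU sketch of that ID-buffer idea (the names `VisionBuffer`/`PaintFov` are hypothetical; a real implementation would rasterize on the GPU): give each viewer one bit as its "paint colour" and OR it into the buffer, so overlapping FOVs don't mix destructively and you can still tell who saw what:

```csharp
using System;

public static class VisionBuffer
{
    // Sign of the cross product (b - a) x (p - a).
    static float Side(float ax, float ay, float bx, float by, float px, float py)
        => (bx - ax) * (py - ay) - (by - ay) * (px - ax);

    static bool Inside(float px, float py,
        float ax, float ay, float bx, float by, float cx, float cy)
    {
        float d1 = Side(ax, ay, bx, by, px, py);
        float d2 = Side(bx, by, cx, cy, px, py);
        float d3 = Side(cx, cy, ax, ay, px, py);
        return !((d1 < 0 || d2 < 0 || d3 < 0) && (d1 > 0 || d2 > 0 || d3 > 0));
    }

    // OR the viewer's bit into every unit cell whose centre the FOV triangle covers.
    public static void PaintFov(int[,] buffer, int viewerBit,
        float ax, float ay, float bx, float by, float cx, float cy)
    {
        for (int y = 0; y < buffer.GetLength(0); y++)
            for (int x = 0; x < buffer.GetLength(1); x++)
                if (Inside(x + 0.5f, y + 0.5f, ax, ay, bx, by, cx, cy))
                    buffer[y, x] |= viewerBit;
    }
}
```

To check visibility of an object, read `buffer[y, x]` at its position: each set bit identifies one viewer whose FOV covers it.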

The canonical solution in software renderers (which must do this exact algorithm every time they rasterize a triangle) is, I believe, to scan the triangle one row of pixels at a time. The left and right edges of each row are calculated by walking down the sides of the triangle using Bresenham's algorithm, and then you fill in the row between them.

I'm glossing over lots of details but that's the basic idea. If you hunt around for "software render" and "triangle rasterization", you'll probably find some more details. This is a well-solved problem. Your graphics card is doing this millions of times a frame.
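A simplified version of that scan (the `ScanFill` name is mine, and it interpolates the edges in floating point rather than walking them with Bresenham, so treat it as a sketch): sort the vertices by Y, then for each row intersect the horizontal line with every edge and fill between the leftmost and rightmost hits:

```csharp
using System;
using System.Collections.Generic;

static class ScanFill
{
    // For each integer row y the triangle crosses, return the inclusive
    // horizontal span [xMin, xMax] covered at that y.
    public static List<(int y, float xMin, float xMax)> Rasterize(
        float ax, float ay, float bx, float by, float cx, float cy)
    {
        var v = new[] { (x: ax, y: ay), (x: bx, y: by), (x: cx, y: cy) };
        Array.Sort(v, (p, q) => p.y.CompareTo(q.y)); // sort by height

        var edges = new[] { (v[0], v[1]), (v[1], v[2]), (v[0], v[2]) };
        var spans = new List<(int, float, float)>();

        for (int y = (int)Math.Ceiling(v[0].y); y <= (int)Math.Floor(v[2].y); y++)
        {
            float xMin = float.PositiveInfinity, xMax = float.NegativeInfinity;
            foreach (var (p, q) in edges)
            {
                if (y < Math.Min(p.y, q.y) || y > Math.Max(p.y, q.y)) continue;
                if (p.y == q.y)
                {
                    // horizontal edge: the whole edge lies on this row
                    xMin = Math.Min(xMin, Math.Min(p.x, q.x));
                    xMax = Math.Max(xMax, Math.Max(p.x, q.x));
                }
                else
                {
                    // intersect the line Y = y with this edge
                    float x = p.x + (q.x - p.x) * (y - p.y) / (q.y - p.y);
                    xMin = Math.Min(xMin, x);
                    xMax = Math.Max(xMax, x);
                }
            }
            spans.Add((y, xMin, xMax));
        }
        return spans;
    }
}
```

Stepping y by the cell size instead of 1 (and dividing the x bounds by the cell size) turns the per-pixel spans into per-cell-row column ranges.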

If you want a more roguelike-specific solution, here's how I implemented FOV in mine. It seems to work pretty quickly. It's essentially a simple shadow caster, working one octant at a time, scanning outwards from the player.

I'm using a variation of the scanline algorithm to solve the exact same problem. I started by sorting the three triangle points by their height. I then check whether the two shorter edges lie on the left or on the right side. For the side with two edges, you have to mark the row where you switch which edge delimits your rows; for the side with a single edge, you can just always use that one.

So then for each row, I know which two edges delimit it, and I can compute upper and lower bounds in the x-direction. It sounds quite complicated, but it condenses down to just a few lines of code. Make sure you handle the special case where one edge is completely horizontal!

How about maintaining, for each row, the range of columns that are in the triangle? You can expand the min and max column of each row containing a triangle point, and of each pair of rows whose horizontal separator line a triangle edge crosses.

```
public class Point
{
    public float X;
    public float Y;
    public Point(float x, float y) { this.X = x; this.Y = y; }
}

public class Line
{
    const float ROW_SIZE = 100f;
    const float COL_SIZE = 100f;

    public Point P1, P2;           // P1 has the lowest Y
    public float Slope, Intercept; // set in constructor
    public bool IsVertical;

    public Line(Point p1, Point p2)
    {
        if (p1.Y > p2.Y) { P1 = p2; P2 = p1; } // ensure P1 has the lowest Y
        else { P1 = p1; P2 = p2; }
        IsVertical = (p1.X == p2.X);
        if (!IsVertical)
        {
            Slope = (p2.Y - p1.Y) / (p2.X - p1.X);
            Intercept = p1.Y - Slope * p1.X;
        }
    }

    public void ExpandRanges(int[] minCol, int[] maxCol)
    {
        // start out at the row, col where P1 is, which has the lowest Y
        int row = (int)(P1.Y / ROW_SIZE);
        int col = (int)(P1.X / COL_SIZE);
        int lastRow = (int)(P2.Y / ROW_SIZE);
        int lastCol = (int)(P2.X / COL_SIZE);

        // expand the first row to include P1
        minCol[row] = Math.Min(col, minCol[row]);
        maxCol[row] = Math.Max(col, maxCol[row]);

        // now find where our line crosses each horizontal grid line up to P2
        float currY = P1.Y;
        float currX = P1.X;
        while (row < lastRow)
        {
            row = row + 1;
            float rowY = row * ROW_SIZE;
            float diffY = rowY - currY;
            float diffX = IsVertical ? 0f : diffY / Slope;
            currY = currY + diffY;
            currX = currX + diffX;
            col = (int)(currX / COL_SIZE);

            // expand the rows above and below the dividing line to include the crossing point
            minCol[row - 1] = Math.Min(col, minCol[row - 1]);
            maxCol[row - 1] = Math.Max(col, maxCol[row - 1]);
            minCol[row] = Math.Min(col, minCol[row]);
            maxCol[row] = Math.Max(col, maxCol[row]);
        }

        // expand the last row to include P2
        minCol[lastRow] = Math.Min(lastCol, minCol[lastRow]);
        maxCol[lastRow] = Math.Max(lastCol, maxCol[lastRow]);
    }

    public static void Test()
    {
        Point p1 = new Point(160, 250);
        Point p2 = new Point(340, 250);
        Point p3 = new Point(250, 40);
        Line l1 = new Line(p1, p2);
        Line l2 = new Line(p2, p3);
        Line l3 = new Line(p3, p1);

        Line[] lines = { l1, l2, l3 };

        int rowCount = 4;
        int[] minCol = new int[rowCount];
        int[] maxCol = new int[rowCount];
        for (int i = 0; i < rowCount; i++)
        {
            minCol[i] = int.MaxValue;
            maxCol[i] = int.MinValue;
        }

        for (int i = 0; i < lines.Length; i++)
            lines[i].ExpandRanges(minCol, maxCol);

        for (int i = 0; i < rowCount; i++)
            Console.WriteLine("Row {0}:  {1} - {2}", i, minCol[i], maxCol[i]);
    }
}
```

Output:

```
Row 0:  2 - 2
Row 1:  1 - 3
Row 2:  1 - 3
Row 3:  2147483647 - -2147483648
```

There's some algorithm work done in the roguelike community that deals with FOV/LOS/lighting in a tile-based world.

Perhaps you can find something on this page that can be used to help solve your problem: http://roguebasin.roguelikedevelopment.org/index.php?title=Field_of_Vision

Specifically "Permissive FOV" might be the most applicable to your situation.

Category: collision detection Time: 2011-01-24 Views: 1