Description
This project is intended as an exploration of various search algorithms, both in the traditional application of path planning, and more abstractly in the construction and design of complex objects.

Environments and Algorithms
Generating Environments: In order to properly compare pathing algorithms, they need to be run multiple times over a variety of environments. A map will be a square grid of cells / locations, where each cell is either empty or occupied. An agent wishes to travel from the upper left corner to the lower right corner, along the shortest path possible. The agent can only move from an empty cell to a neighboring empty cell, either up/down or left/right – each cell thus has at most four neighbors.
Figure 1: Successful and Unsuccessful Maze Environments.
Maps may be generated in the following way: for a given dimension dim, construct a dim x dim array; given a probability p of a cell being occupied (0 < p < 1), read through each cell in the array and determine at random whether it should be filled or empty. When filling cells, exclude the upper left and lower right corners (the start and goal, respectively). It is convenient to define a function that generates these maps for a given dim and p.
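The generation procedure above can be sketched as follows. The function name, the boolean-grid representation (True = occupied), and the optional seed parameter are illustrative choices, not requirements of the assignment:

```python
import random

def generate_map(dim, p, seed=None):
    """Return a dim x dim grid where True marks an occupied cell.

    Each cell is occupied independently with probability p, except the
    start (0, 0) and goal (dim-1, dim-1), which are always left empty.
    """
    rng = random.Random(seed)
    grid = [[rng.random() < p for _ in range(dim)] for _ in range(dim)]
    grid[0][0] = False                # start is always empty
    grid[dim - 1][dim - 1] = False    # goal is always empty
    return grid
```

Passing a seed makes individual maps reproducible, which is convenient when comparing algorithms on the same environment.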
Figure 2: Maps generated with p = 0.1, 0.3, 0.5 respectively.
Path Planning: Once you have the ability to generate maps with specified parameters, implement the ability to search for a path from corner to corner, using each of the following algorithms:
Computer Science Department – Rutgers University Spring 2020
Depth-First Search

Breadth-First Search

A*: where the heuristic is to estimate the distance remaining via the Euclidean distance

d((x_{1}, y_{1}), (x_{2}, y_{2})) = \sqrt{(x_{1} - x_{2})^{2} + (y_{1} - y_{2})^{2}}. (1)

A*: where the heuristic is to estimate the distance remaining via the Manhattan distance

d((x_{1}, y_{1}), (x_{2}, y_{2})) = |x_{1} - x_{2}| + |y_{1} - y_{2}|. (2)

Bi-Directional Breadth-First Search
For any specified map, applying one of these search algorithms should either return failure, or a path from start to goal as a list of cells taken. (It may be beneficial for some of these questions to return additional information about how the algorithm ran as well.)
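As one example of this interface, a breadth-first solver might be sketched as below. It assumes the grid is a list of lists of booleans with True meaning occupied (an illustrative representation, not one the assignment prescribes), and returns either a shortest path as a list of cells or None on failure:

```python
from collections import deque

def bfs_path(grid):
    """Breadth-first search from (0, 0) to (dim-1, dim-1).

    grid[r][c] is True when cell (r, c) is occupied.  Returns the list
    of cells on a shortest path, or None if no path exists.
    """
    dim = len(grid)
    start, goal = (0, 0), (dim - 1, dim - 1)
    parent = {start: None}          # doubles as the visited set
    fringe = deque([start])
    while fringe:
        r, c = fringe.popleft()
        if (r, c) == goal:
            # Walk parent pointers back to the start to recover the path.
            path, cell = [], goal
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < dim and 0 <= nc < dim and not grid[nr][nc]
                    and (nr, nc) not in parent):
                parent[(nr, nc)] = (r, c)
                fringe.append((nr, nc))
    return None
```

The DFS, A*, and bi-directional variants can share the same signature, differing only in the fringe discipline (stack, priority queue, or two frontiers).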

Analysis and Comparison
Having coded five path-finding algorithms, we want to analyze and compare their performance. This is important not only for theoretical reasons, but also as a check that your algorithms are behaving as they should.
Find a map size (dim) that is large enough to produce maps that require some work to solve, but small enough that you can run each algorithm multiple times for a range of possible p values. How did you pick a dim?
For p ≈ 0.2, generate a solvable map, and show the paths returned by each algorithm. Do the results make sense? ASCII printouts are fine, but good visualizations are a bonus.
Given dim, how does maze solvability depend on p? For a range of p values, estimate the probability that a maze will be solvable by generating multiple mazes and checking them for solvability. What is the best algorithm to use here? Plot density vs solvability, and try to identify as accurately as you can the threshold p_{0} such that for p < p_{0} most mazes are solvable, but for p > p_{0} most mazes are not.
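The estimation loop might look like the following sketch. The function names are illustrative; the choice of a plain iterative DFS for the reachability test is one reasonable option here, since only the existence of a path matters, not its length:

```python
import random

def solvable(grid):
    """DFS reachability check from (0, 0) to the far corner."""
    dim = len(grid)
    goal = (0, 0) if dim == 1 else (dim - 1, dim - 1)
    seen, stack = {(0, 0)}, [(0, 0)]
    while stack:
        r, c = stack.pop()
        if (r, c) == goal:
            return True
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < dim and 0 <= nc < dim and not grid[nr][nc]
                    and (nr, nc) not in seen):
                seen.add((nr, nc))
                stack.append((nr, nc))
    return False

def estimate_solvability(dim, p, trials=100, seed=None):
    """Fraction of `trials` random maps at density p that are solvable."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(dim)] for _ in range(dim)]
        grid[0][0] = grid[dim - 1][dim - 1] = False
        hits += solvable(grid)
    return hits / trials
```

Sweeping p over, say, 0.05-increments and plotting the returned fractions gives the density-vs-solvability curve from which p_{0} can be read off.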
For p in [0; p_{0}] as above, estimate the average or expected length of the shortest path from start to goal. You may discard unsolvable maps. Plot density vs expected shortest path length. What algorithm is most useful here?
Is one heuristic uniformly better than the other for running A*? How can they be compared? Plot the relevant data and justify your conclusions.
Do these algorithms behave as they should?
For DFS, can you improve the performance of the algorithm by choosing what order to load the neighboring rooms into the fringe? What neighbors are ‘worth’ looking at before others? Be thorough and justify yourself.
On the same map, are there ever nodes that BD-BFS expands that A* doesn't? Why or why not? Give an example, and justify.
Bonus: How does the threshold probability p_{0} depend on dim? Be as precise as you can.

Generating Hard Mazes
So far we have looked only at randomly generated mazes, and at the average behavior of algorithms over these mazes. But what would a ‘hard’ maze look like? Three possible ways you might quantify ‘hard’ are: a) the length of the shortest path, b) the total number of nodes expanded during solving, and c) the maximum size of the fringe at any point in solving.
One potential approach to generating hard mazes would be the following: for a given solving algorithm, generate random mazes and solve them, keeping track of the ‘hardest’ maze generated so far. This would, over time, result in progressively harder mazes. However, this does not learn from past results – having discovered a particularly difficult maze, it has no mechanism for using that to discover new, harder mazes. Each round starts from scratch.
One way to augment this approach would be a random walk. Generate a maze, and solve it to determine how ‘hard’ it is. Then at random, add or remove an obstruction somewhere on the current maze, and solve this new configuration. If the result is harder to solve, keep this new configuration and delete the old one. Repeat this process. This has some improvements over repeatedly generating random mazes as above, but it can be improved upon still. For this part of the project, you must design a local search algorithm (other than the directed random walk described here) and implement it to try to discover hard-to-solve mazes. Mazes that admit no solution may be discarded; we are only interested in solvable mazes.
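The directed random walk just described can be sketched as below, as a baseline to improve on. Here `score` is a caller-supplied function implementing one of the hardness metrics (a)–(c), returning None for unsolvable mazes; the function name and this callback design are illustrative assumptions:

```python
import random

def random_walk_harden(grid, score, steps=500, seed=None):
    """Directed random walk over maze configurations.

    `score(grid)` returns a hardness value, or None when the maze is
    unsolvable.  Each step flips one random non-corner cell and keeps
    the change only if the maze stays solvable and gets strictly harder.
    """
    rng = random.Random(seed)
    dim = len(grid)
    best = score(grid)
    if best is None:
        raise ValueError("initial maze must be solvable")
    for _ in range(steps):
        r, c = rng.randrange(dim), rng.randrange(dim)
        if (r, c) in ((0, 0), (dim - 1, dim - 1)):
            continue                       # never block start or goal
        grid[r][c] = not grid[r][c]        # add or remove an obstruction
        new = score(grid)
        if new is not None and new > best:
            best = new                     # harder and still solvable: keep
        else:
            grid[r][c] = not grid[r][c]    # otherwise revert the flip
    return grid, best
```

Since this walk only ever accepts strict improvements, it stalls at local optima – exactly the shortcoming a better local search (e.g. one that occasionally accepts worse configurations, or restarts) should address.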
What local search algorithm did you pick, and why? How are you representing the maze/environment to be able to utilize this search algorithm? What design choices did you have to make to apply this search algorithm to this problem?
Unlike the problem of solving the maze, for which the ‘goal’ is well-defined, it is difficult to know if you have constructed the ‘hardest’ maze. What kind of termination conditions can you apply here to generate a hard, if not the hardest, maze? What kind of shortcomings or advantages do you anticipate from your approach?
Try to find the hardest mazes for the following algorithms using the paired metric:
– DFS with Maximal Fringe Size
– A* (Manhattan) with Maximal Nodes Expanded
Do your results agree with your intuition?

What If The Maze Were On Fire?
All solution strategies discussed so far are in some sense ‘static’. The solver has the map of the maze, spends some computational cycles determining the best path to take, and then that path can be implemented, for instance by a robot actually traveling through the maze. But what if the maze were changing as you traveled through it? You might be able to solve the ‘original’ maze, but as you start to actually follow the solution path, the maze may change and that solution may no longer be valid.
Consider the following model of the maze being on fire: any cell in the maze is either ‘open’, ‘blocked’, or ‘on fire’. Starting out, a randomly selected open cell is ‘on fire’. You can move between open cells or choose to stay in place, once per time step. You cannot move into cells that are on fire, and if your cell catches on fire you die. But at each time step, the fire may spread, according to the following rules: for some ‘flammability rate’ 0 ≤ q ≤ 1,
If a free cell has no burning neighbors, it will still be free in the next time step.
If a cell is on fire, it will still be on fire in the next time step.
If a free cell has k burning neighbors, it will be on fire in the next time step with probability 1 − (1 − q)^{k}.
Note, for q = 0, the fire is effectively frozen in place, and for q = 1, the fire spreads quite rapidly. Additionally, blocked cells do not burn, and may serve as a barrier between you and the fire.
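The spread rules above can be sketched as a one-step update. The integer encoding of cell states and the function name are illustrative choices:

```python
import random

OPEN, BLOCKED, FIRE = 0, 1, 2   # illustrative cell-state encoding

def advance_fire(grid, q, rng=random):
    """One time step of fire spread.

    An open cell with k burning neighbors ignites with probability
    1 - (1 - q)**k; burning cells stay burning, blocked cells never burn.
    """
    dim = len(grid)
    nxt = [row[:] for row in grid]   # update against the *old* state
    for r in range(dim):
        for c in range(dim):
            if grid[r][c] != OPEN:
                continue
            k = sum(1 for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))
                    if 0 <= nr < dim and 0 <= nc < dim and grid[nr][nc] == FIRE)
            if k and rng.random() < 1 - (1 - q) ** k:
                nxt[r][c] = FIRE
    return nxt
```

Note that the neighbor counts are computed against the old grid, so the fire advances by at most one ring of cells per time step, matching the rules above.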
How can you solve the problem (to move from upper left to lower right, as before) in this situation?
Consider the following baseline strategies:
Strategy 1) At the start of the maze, wherever the fire is, solve for the shortest path from upper left to lower right, and follow it until you exit the maze or you burn. This strategy does not modify its initial path as the fire changes.
Strategy 2) At every time step, recompute the shortest path from your current position to the goal position, based on the current state of the maze and the fire. Follow this new path one time step, then recompute. This strategy constantly readjusts its plan based on the evolution of the fire. If the agent gets trapped with no path to the goal, it dies.
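Strategy 2's replan-then-step loop might be structured as in the sketch below. The planner and fire-step are passed in as functions (an illustrative design, so the loop stays independent of any particular solver); cells equal to 0 are assumed ‘open’:

```python
def strategy2(grid, start, goal, plan, step_fire):
    """Replan at every time step (Strategy 2).

    `plan(grid, cell, goal)` returns a list of cells from `cell` to
    `goal` avoiding blocked and burning cells, or None on failure;
    `step_fire(grid)` returns the grid after one step of fire spread.
    Returns True if the agent reaches the goal, False if it dies.
    """
    pos = start
    while pos != goal:
        path = plan(grid, pos, goal)
        if path is None or len(path) < 2:
            return False                  # trapped: no route to the goal
        pos = path[1]                     # take one step along the fresh plan
        grid = step_fire(grid)            # then the fire advances
        if grid[pos[0]][pos[1]] != 0:     # 0 encodes 'open'; else we burn
            return False
    return True
```

Strategy 1 is the same loop with the planner called exactly once; keeping both behind a common interface makes the success-rate comparison straightforward.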
Generate a number of mazes at the dimension dim and density p_{0} as in Section 2. Be sure to generate a new maze and a new starting location for the fire each time. Please discard any maze where there is no path from the initial position of the agent to the initial position of the fire – for these mazes, the fire will never catch the agent and the agent is not in any danger. For each strategy, plot a graph of ‘average successes vs flammability q’. Note, for each test value of q, you will need to generate multiple mazes to collect data. Does recomputing your path like this have any benefit, ultimately?
Come up with your own strategy to solve this problem, and try to beat both the above strategies. How can you formulate the problem in an approachable way? How can you apply the algorithms discussed? Note, Strategy 1 does not account for the changing state of the fire, but Strategy 2 does. But Strategy 2 does not account for how the fire is going to look in the future. How could you include that?
A full credit solution must take into account not only the current state of the fire but potential future states of the fire, and compare the strategy to Strategy 1 and Strategy 2 on a similar average successes vs flammability graph.