When dealing with meshes, the term "normal" means a "normal vector": a vector perpendicular to some planar surface.

Imagine a cube. Now place identical regular cones (each of height 1) with their bases on the faces of the cube. An arrow from a cone's base center to its tip is perpendicular to the cube face it sits on, so you may call it a face normal vector.

Let's say the height of the cone means something. Let's say it defines some force pulling in the direction of the arrow. Now if there's a cone ("arrow", "normal", "force") on each of the six sides of the cube, nothing will happen, because there are three pairs of vectors pulling with the same force in opposite directions. Removing one cone representing a force gives a different result: there are still two pairs of vectors pulling in opposite directions, so their sum is zero, but the one without an opposite partner will pull the cube in its direction.

In maths this situation can be described this way (each vector as the difference of the tip's and the base's 3D coordinates):

v1 = ( 0, 0, 1)   (e.g. tip at (178, 17, 78) minus base center at (178, 17, 77))

v2 = ( 1, 0, 0)

v3 = (-1, 0, 0)   (contrary to v2)

v4 = ( 0, 1, 0)

+ v5 = ( 0,-1, 0)   (contrary to v4)

-----------------

= vS = ( 0, 0, 1)   (the final result)
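The cancellation above is just a component-wise vector sum. A minimal sketch in plain Python, using the five vectors from the example:

```python
# The five remaining face "force" vectors (one cone was removed)
v1 = (0, 0, 1)
v2 = (1, 0, 0)
v3 = (-1, 0, 0)  # cancels v2
v4 = (0, 1, 0)
v5 = (0, -1, 0)  # cancels v4

# Component-wise sum of all five vectors
vS = tuple(sum(c) for c in zip(v1, v2, v3, v4, v5))
print(vS)  # (0, 0, 1)
```

Only the unpaired vector v1 survives, which is exactly why the cube is pulled in that direction.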

Faces' normals in 3D meshes aren't used to apply anything like a "force", and their length isn't important; we are interested in them only to get directions. For example, normals are used for shading, to define a face's front and back sides, and for other things such as finding a perpendicular extrusion direction.
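For a triangle, the standard way to get such a normal is the cross product of two edge vectors. A minimal sketch in plain Python (the triangle coordinates are made up for illustration):

```python
def face_normal(a, b, c):
    """Unit normal of the triangle (a, b, c) via the cross product."""
    # Edge vectors from vertex a
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    # Cross product u x v is perpendicular to the triangle's plane
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    # Normalize: only the direction matters, not the length
    length = sum(x * x for x in n) ** 0.5
    return [x / length for x in n]

# A triangle lying in the z = 0 plane: its normal points along +z
print(face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # [0.0, 0.0, 1.0]
```

Note that the winding order (a, b, c vs. a, c, b) decides which side counts as the front.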

As there are tons of different normals (one for each triangle) and we are interested only in the direction, one can divide the sum of all vectors by the number of vectors, which is an average.
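That averaging step can be sketched like this (the three example normals are made-up values, not taken from any real mesh):

```python
def average_normal(normals):
    """Average a list of 3D normal vectors and re-normalize the result."""
    n = len(normals)
    # Component-wise sum divided by the count = average vector
    avg = [sum(v[i] for v in normals) / n for i in range(3)]
    # Re-normalize so the result is a pure direction again
    length = sum(x * x for x in avg) ** 0.5
    return [x / length for x in avg] if length > 0 else avg

# Three made-up face normals that roughly agree on +z
normals = [(0, 0, 1), (0.1, 0, 0.9), (-0.1, 0, 0.9)]
print(average_normal(normals))  # close to [0.0, 0.0, 1.0]
```

One caveat: if the normals point in wildly opposing directions they can cancel out (as with the cube above), so the average is only meaningful when the faces roughly agree on a direction.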

In your case, such an averaged normal points along the "main direction" of your mesh.

Instead of querying all faces' normals and averaging them in your code: a pivot dropped onto a face group's center (you can create it by calling this tool via mmApi) gives that "main direction" directly.

Well, you could find the open boundaries and their bounding boxes one by one, store these boxes in a list and finally pick the biggest one. Maybe this works... But when it comes to aligning this biggest box you'll hit the same issue: you'll need to find a direction. You could then select faces at the boundary, get their local frame and use it to get the orientation... a long way around.
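Just to illustrate the first half of that idea, here is a minimal sketch of "collect bounding boxes, keep the biggest". The boundary loops are hypothetical point lists, not real mmApi output, and "biggest" is measured by the box diagonal:

```python
def bbox(points):
    """Axis-aligned bounding box of a list of 3D points: (min corner, max corner)."""
    lo = [min(p[i] for p in points) for i in range(3)]
    hi = [max(p[i] for p in points) for i in range(3)]
    return lo, hi

def bbox_size(box):
    """Squared diagonal length, used as a simple 'biggest box' measure."""
    lo, hi = box
    return sum((hi[i] - lo[i]) ** 2 for i in range(3))

# Two hypothetical open-boundary loops (lists of 3D points)
loops = [
    [(0, 0, 0), (1, 0, 0), (1, 1, 0)],
    [(0, 0, 0), (5, 0, 0), (5, 5, 0), (0, 5, 2)],
]
boxes = [bbox(loop) for loop in loops]
biggest = max(boxes, key=bbox_size)
print(biggest)  # ([0, 0, 0], [5, 5, 2])
```

Even with the biggest box in hand, though, you still don't know its orientation, which is exactly the problem described above.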

Although mmApi is great, remember: actions get slower the more data you exchange between the API and MM. So if there's a way to extract the needed data from a built-in tool's result, use it instead of running several queries.