This application makes beam optimization easy: it interfaces with FE software and changes beams' cross-sections according to their utilization factors until a given goal is reached.
Three types of cross-section optimization are possible, depending on how cross-sections are updated at each iteration based on their utilization factor:
- Each beam takes the most suited cross-section
- Beams are grouped and the most utilized one drives the choice of the fittest cross-section, which is then applied to all the others as well. Cross-section lists can differ between groups.
- Beams are grouped, but each beam is optimized on its own, using the cross-sections available to that group.
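The three modes can be sketched as follows. This is a hypothetical illustration in Python (Dodo itself runs inside Grasshopper/.NET); the function names, the mode labels, and the dictionary shapes are all assumptions made for the sketch, not the plugin's actual API.

```python
def fittest(demand, sections):
    """Smallest section (in a capacity-ascending list) satisfying the demand."""
    for s in sections:
        if s["capacity"] >= demand:
            return s
    return sections[-1]  # nothing is strong enough: fall back to the largest

def optimize(beams, groups, mode, sections_by_group):
    """Assign a cross-section to every beam under the three modes above.

    beams: {beam name: demand}, groups: {group name: [beam names]},
    sections_by_group: {group name: capacity-ascending list of sections}."""
    out = {}
    if mode == "each":            # every beam takes its own fittest section
        for b, d in beams.items():
            out[b] = fittest(d, sections_by_group["default"])
    elif mode == "group-driver":  # most utilized beam drives the whole group
        for g, members in groups.items():
            driver = max(members, key=lambda b: beams[b])
            section = fittest(beams[driver], sections_by_group[g])
            for b in members:
                out[b] = section
    elif mode == "group-individual":  # own section from the group's list
        for g, members in groups.items():
            for b in members:
                out[b] = fittest(beams[b], sections_by_group[g])
    return out
```

With two beams in one group, "group-driver" gives both beams the section sized for the heavier demand, while "group-individual" sizes each beam on its own from the same list.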
This four-day workshop was held in Salerno in September 2013, with Daniel Piker and Gennaro Senatore as tutors. The goal was to design and build a reciprocal frame system from a freeform surface, helped by the new Kangaroo tools Daniel was developing. You can get an idea of what a reciprocal frame system is by reading my article on nexorades; bear in mind that these structures were traditionally conceived to work only with compressive contact forces, but negatively curved surfaces inevitably introduce tensile forces, which is why plastic strings were needed to keep the assembly together.
You can download the Grasshopper definition and the output file here.
The two tutors introduced themselves and we did the same. Some of the team already knew Grasshopper, while for others it was their first time. The tutors introduced the software and probably its most renowned plugin, Kangaroo, which we would use extensively.
Daniel explained how to use Kangaroo to form-find membranes in pure tension and asked us to produce a nice surface compliant with the space where the pavilion would be placed. That surface would then be used as the base for the reciprocal system.
We started from a triangular mesh, since that's the arrangement we wanted for the reciprocal system. The boundary vertices were fixed, whilst the inner edges were used as springs with rest length equal to their initial length. A uniform upward load was applied, so that the actual structure would resist vertical (gravity) loads. To produce a good surface it helps to use a clean mesh, possibly close to the final shape: during relaxation each spring produces a force proportional to its deformation. Ideally one would iteratively relax a mesh and reuse the result as the starting mesh, until all the members elongate by the same amount.
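The relaxation step above can be sketched as a minimal explicit spring solver. This is a much simplified stand-in for Kangaroo, written in Python for illustration; the function name, the load convention, and the constants are assumptions of the sketch.

```python
import math

def relax(points, edges, fixed, load=(0.0, 0.0, 0.1),
          k=1.0, dt=0.1, iters=1000):
    """Relax a spring network: each edge is a spring whose rest length is
    its initial length, so the force is proportional to the elongation.
    Free nodes also receive a uniform (here upward) load."""
    rest = {e: math.dist(points[e[0]], points[e[1]]) for e in edges}
    pts = [list(p) for p in points]
    for _ in range(iters):
        force = [[0.0, 0.0, 0.0] for _ in pts]
        for (i, j), r in rest.items():
            d = [b - a for a, b in zip(pts[i], pts[j])]
            length = math.sqrt(sum(c * c for c in d)) or 1e-9
            f = k * (length - r) / length  # > 0 when stretched
            for c in range(3):
                force[i][c] += f * d[c]    # pull i toward j
                force[j][c] -= f * d[c]    # and j toward i
        for i, p in enumerate(pts):
            if i in fixed:
                continue                   # boundary vertices stay put
            for c in range(3):
                p[c] += dt * (force[i][c] + load[c])
    return pts

# Two fixed supports and one free node between them: the upward load
# lifts the node until the stretched springs balance it.
result = relax([(0, 0, 0), (2, 0, 0), (1, 0, 0)], [(0, 2), (1, 2)], {0, 1})
```

By symmetry the free node rises vertically and settles where the spring tension cancels the load, which is exactly the form-finding behaviour described above.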
The mesh was then refined using Piker's remeshing tool, which takes advantage of Plankton, the library that brings the half-edge data structure into Grasshopper. We ended up with a neat triangular mesh, ready to be transformed into a reciprocal system!
The reciprocal system was found using Kangaroo's component, checking that the resulting structure would stand without the members sliding over one another. The process followed so far brought us to a structure working in pure axial force and shear, with no additional bending. That done, before fabrication we had to number each stick and know which other sticks crossed it, and where. This was done with a Grasshopper definition.
We built it! We started by numbering each plastic tube (they were all the same), writing its number in the middle and, at its ends, a line showing where the other two tubes would connect, together with their numbers. The sticks were fixed with rubber bands and plastic strings so they would not slide.
This is the paper I presented at the International Association for Shell and Spatial Structures symposium in 2013. You can find the full paper here.
“Reciprocal system” is an expression that designates a specific family of structures which has its roots in very old traditional construction systems in various regions of the world. Some examples can be found in traditional 12th century roofing structures in Japan as well as in manuscripts from the French architect Villard de Honnecourt or the Italian scholar Leonardo da Vinci. This particular structural system can be seen as a construction technique to cover large spaces with short members assembled together with very simple connections: traditionally, for stone architecture, connections are considered as compressed and unidirectional joints but, in many recent applications, connections are made of spherical joints.
The reciprocal character of the system comes from the fact that locally every member is supported by and supports its neighbours in a cyclic way, as illustrated in the figure. To describe this kind of assembly, Baverel suggested that a reciprocal system, also called a "nexorade", can be seen as a traditional structure in which the members converging at a node have been engaged within this node, and thus proposed to call "engagement length" the length of the member which has entered the connection and "engagement window" the opening formed by the engagement of the members within the node. He then defined some other geometrical properties to characterise end dispositions, but it is not necessary to recall them here as the only configuration investigated in this paper is that of the figure.
Considering structural behaviour, research has been mainly dedicated to the investigation of some particular configurations or some particular structures. In these studies, the authors observed many interesting phenomena and relationships which are quite characteristic of the behaviour of reciprocal systems. For instance, about the way the transverse load "circulates" from the interior of the surface to the supports, Gelez pointed out that global bending of the "grid" generates additional local bending and large shear forces and that, more surprisingly for a simply supported structure, the shear forces in the members were maximum at mid-span, whereas it is classically acknowledged that shear forces are maximum close to the supports. Likewise Douthe and Baverel or Sanchez and Escrig underlined a strong link between the engagement length, inner bending moments and bending stiffness of the system.
However, from a design point of view, there are actually no practical recommendations or formulas giving the influence of the various parameters on, for example, the global deflection of a reciprocal system or the inner forces in the members. So, as the generation of a complete reciprocal system (especially its geometry) is time consuming, the possibilities for optimisation and the variety of construction systems are practically reduced.
It is possible to give partial answers to these questions by means of homogenization techniques. The basic idea of homogenization consists in "seeing from afar" the beam assembly which constitutes the nexorade. Indeed, even if the assembly is made of a well-defined periodic pattern at the local scale, when zooming out the pattern blurs until only a surface is seen. In mathematical terms, this procedure consists in separating scales: macroscopically the engineer wants to see an equivalent shell or plate model, and microscopically he does not want to miss the detail of the stress state. Scale separation has been rigorously established mathematically. Based on this assumption, in the linear elasticity framework it is possible to derive a macroscopic model with a homogenized constitutive equation, as well as localization fields which enable the reconstruction of the fields inside the unit cell that generates the whole structure.
Here, the nexorade is made of a few beams arranged to create a planar periodic pattern. The unit cell which generates it is constituted of only two beams (see the red members in the figure above). It will thus be possible to derive closed-form solutions for the localized resultants and moments induced by the macroscopic plate fields. The corresponding plate model, that of the macroscopic surface, will be a Kirchhoff-Love plate model whose equivalent bending stiffness will be derived mathematically and investigated.
It must be made clear that, while homogenization techniques are generally used to reduce the computational burden by separating scales, in the present case it is obviously straightforward and accurate to compute the full structure directly, so that a gain is only expected in terms of parametric study: homogenization techniques avoid the need to generate numerous models with different geometries. Apart from this aspect, our intention is to reveal some interesting features of the intrinsic behaviour of these periodic structures and to infer their consequences for preliminary design.
This is the poster I presented at the AAG14, trying to sum up my Master’s Thesis project.
Modern architecture has brought innovative ways to approach the field: new materials, integrated workflows, complex structures, energetics, structural optimization and new shapes, specifically free forms. In such a constantly evolving landscape, computational and design tools must keep pace, along with designers' in-depth knowledge of them. The process that brings an architectural idea to construction is long and requires a great number of variations, which may come from the client, the architect, the industry, the engineer, or anyone else working on the project. Not only are traditional design methods becoming obsolete, but the possibilities arising from having the whole project fully parametrized are becoming more and more compelling. A fully parametrized project makes it easy to explore all possible solutions and eventually find the best one, using scripts appropriately. This paper presents the workflow necessary to design a free-form roof from scratch and to optimize it. Along the way, several problems arose that would have been very hard to deal with without the automatisms inherent to a parametric conception.
The first problem, of course, is drawing such a complex surface and the related structure. As said, traditional tools such as CAD modeling are not well suited. Furthermore, it is well known that projects change hundreds of times during their conception and even their realization: updating the model with CAD, and subsequently in all the software involved (structural analysis, energetics, fabrication, costs, etc.), is unfeasible.
Such architectures are highly expensive, so every effort should be made to reduce costs. This practice is commonly referred to as optimization, since the aim is to optimize the use of material or the fabrication (both related to costs) and obtain the best results with the least expense. Certain assemblies are expensive beyond imagination, or not even possible to realize; in this case the geometry itself must be remodeled to make feasible what otherwise would not be.
Regarding fabricability, the project features many different nodes; this again should catch the engineer's attention and push them to find a way to fabricate the nodes in series to drive costs down.
Regarding the project as a whole, interoperability between software packages is crucial. As mentioned, many packages take part in the workflow of free-form architectures (e.g. for structural analysis, energetics, construction phases, cost analysis, drawing). Models must pass seamlessly from one package to another, without loss of information and within a short time. Taking the concept further, the ideal would be for the entire project, with all the information it carries, to flow from the first step to the last without any external input. The only way to do so is to have the whole thing written in a parametric fashion, so that when an input changes the whole project updates, along with its various outputs.
The project: between dream and reality
The Scuola Normale Superiore technical division asked us to conceive and design a roof for the Scuola's inner courtyard, within a framework of accommodating the surrounding area. The courtyard is approximately 20x20 m and the highest building is around 22 m.
The building is a city landmark situated in the very city center, relevant both for its beauty and for being the pulsing heart of one of the most renowned universities in Italy and the world.
The client was very strict with its requirements. The new structure could not in any way rely on the ancient one, and it had to leave a certain gap between itself and the surrounding roofs for natural air conditioning. Therefore some kind of vertical structure had to be studied for the glass roof to stand on, but that new structure could not prevent passers-by from enjoying the surrounding facades.
Several hypotheses were studied, from both the architectural and the structural points of view. Free-form architectures cannot be designed in sequential stages, for a change in one aspect will most probably induce a change in another. They must be designed in coordination with all the professionals working on them, and their evolution should ideally follow a loop (see figure on the left).
In the end, the best solution was the one shown in the render. The shape resembles a calla lily and the beam pattern is composed of intersecting logarithmic spirals. Here one can immediately spot a big difference from the traditional gridshells used for roofing: usually gridshells are form-found through a relaxation process, making the structure work mostly in pure compression. This case is different, and was adopted because the client required the structure not to rely on the historical one. As a result the beams work mainly in bending, but intertwining them increases the overall stiffness and resistance, making the structure redundant.
As already said, free-form roofs are traditionally form-found by a relaxation procedure. Applying the same procedure in this case would result in a cone-like shape with approximately zero Gaussian curvature. A different approach is to find the minimal surface joining some given curves, and then superimpose the two solutions.
A minimal surface has the least area among all surfaces respecting a set of boundaries (i.e. curves). Minimal surfaces have several interesting properties, such as having zero mean curvature everywhere. Moreover, if we think of such a surface as an elastic tensile membrane, at every point the sum of stresses in any direction must equal zero, or the applied external load if there is one.
This property is used to discretize the surface with a mesh where each edge is a spring with zero rest length, since we want to find the minimal area. The procedure is not free from errors; the most evident one is that the principal directions are somehow set by the initial mesh. The minimal surface found was then tweaked by adding an inverse gravity load so that it would assume an inverted-catenary shape.
The beam pattern is derived by intersecting logarithmic spirals projected onto the surface. But here another problem arises. Fabricability is the key issue for this kind of architecture, with node torsion being one of its main enemies, so the beam pattern must be checked against this issue. It is known that conical meshes have no node torsion. Since the beam weave is made of quads, it could be transformed into a mesh; once the mesh was obtained, an optimization procedure was run to make it as conical as possible.
Structural parts of such complex free forms can hardly be dimensioned. In addition, given the high redundancy, an optimization procedure can produce interesting results and in structures with lots of beams a sharp decrease of costs. In this project structural optimization was approached in two ways:
- Loop optimization;
- Evolutionary multi-objective optimization (EMO).
The former is quite straightforward. A loop adjusts each beam section's thickness according to the ratio between load and resistance. The loop runs until certain thresholds are met; in this case there were two:
- The whole structure is verified;
- Displacements and deformations consistent with the Eurocodes.
Even though this method does not necessarily lead to the best possible solution, EMO does: it searches the whole solution domain and therefore belongs to the so-called global optimization algorithms.
The loop optimization works as follows:
- Initially the algorithm analyzes the structure and calculates, for each beam, the ratio between the load carried and the resistance (later referred to as the Utilization Factor, UF).
- According to the UF, the section thickness is updated using the expression plotted in the figure above, where a = 1.2 is a parameter to speed up convergence and the target Utilization Factor is less than 1.0 to guarantee a certain degree of safety. Basically, when a section is overstressed the algorithm increases its thickness, and vice versa. Note that this process uses real (float) numbers even though thickness should be an integer, because floats give better numerical stability.
- The structure is analyzed again using the updated thicknesses and UFs are recalculated. The loop ends when all $UF\leq0.9$ and stiffness requirements are met.
- Once convergence is reached, thicknesses are reduced with a final mapping, because the update expression does not attempt to reduce overall mass, so this has to be done afterwards. A few beams were no longer verified and had to be updated manually. Finally, the real-valued thicknesses were rounded up to the nearest integer, or rather to the first matching entry in a given set of industrial rectangular hollow sections.
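The sizing loop can be sketched in a few lines of Python. The poster's exact update expression is only given as a figure, so the power-law update below, which thickens overstressed sections and thins understressed ones with the exponent a speeding up convergence, is an assumption with the same qualitative behaviour; `analyze` stands in for the FE run.

```python
def loop_optimize(analyze, thickness, uf_target=0.9, a=1.2, iters=50):
    """UF-driven sizing loop (sketch).

    analyze: maps a list of section thicknesses to a list of
    utilization factors (the FE analysis in the real workflow).
    The power-law update is an assumed stand-in for the poster's
    expression: UF above the target grows the section, below shrinks it."""
    for _ in range(iters):
        ufs = analyze(thickness)
        thickness = [t * (uf / uf_target) ** a
                     for t, uf in zip(thickness, ufs)]
    return thickness

# Toy model: UF inversely proportional to thickness (UF = 9 / t),
# so the loop should settle at t = 9 / 0.9 = 10 where UF = 0.9.
result = loop_optimize(lambda ts: [9.0 / t for t in ts], [5.0])
```

In this toy model each iteration shrinks the error in log-space by a constant factor, so the loop converges to the thickness that exactly hits the target UF.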
Free forms are not just structure; they are above all aesthetic. These shapes need to be covered by panels, a procedure known as panelization. Glass is possibly one of the most difficult materials to use for panelization. There are competing necessities in this practice:
- Resemblance with initial shape (position and derivative continuity);
- Cost increases with panel curvature;
- Since each panel is different, it requires a specific mold, and molds are really expensive.
The solution to this problem is not obvious, and several papers have been written on the subject (see references at the bottom). Every architecture has its own story and requirements, thus requiring specific adaptations of general concepts such as those presented in the above-mentioned papers. In this project panelization followed two steps:
- Reduction of overall curvature;
- Reduction of the quantity of molds.
Even if the shape is set, one can attempt to reduce the average curvature by slightly moving the surface control points, thus obtaining an optimized shape very close to the initial one. To do so, a fictitious energy was minimized: it sums a term counted for every node and a term counted for every face (the latter evaluated at the points where the two diagonals pass), combined through weight coefficients.
Minimization of this energy can be done in several ways; in this project a physical spring-particle system was used: Kangaroo, a Grasshopper plug-in. The new surface was finally generated through the new control points.
The reduction of the number of molds works as follows. First, for each panel the two principal curvatures are calculated at the middle point. The two lists of principal curvatures thus found are merged and divided into sub-lists, each one having its highest curvature value at most 1.20 times its lowest. Each sub-list is assigned a single mold curvature, equal to the average curvature of the set. At the end of this procedure each panel belongs to a group whose members share the same constant curvature. Panels with curvature below 1/30 m⁻¹ (i.e. with a curvature radius above 30 m) were considered planar.
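The grouping step above can be sketched as a simple greedy binning of the sorted curvature list. This is an illustrative Python sketch under the stated rules (1.20 ratio per bin, 1/30 m⁻¹ planarity threshold); the function name and data layout are assumptions, not the project's actual code.

```python
def group_by_curvature(curvatures, ratio=1.20, planar_below=1 / 30):
    """Bin panel curvatures so that, within each bin, the highest value is
    at most `ratio` times the lowest; each bin then shares one mold whose
    curvature is the bin average. Curvatures under `planar_below`
    (radius over 30 m) are treated as planar and excluded."""
    curved = sorted(c for c in curvatures if c >= planar_below)
    bins, current = [], []
    for c in curved:
        if current and c > ratio * current[0]:
            bins.append(current)   # c would break the 1.20 ratio: close bin
            current = []
        current.append(c)
    if current:
        bins.append(current)
    molds = [sum(b) / len(b) for b in bins]  # one average-curvature mold per bin
    return molds, bins

# 0.01 is planar; 0.1 and 0.11 share a mold; 0.2 needs its own.
molds, bins = group_by_curvature([0.01, 0.1, 0.11, 0.2])
```

The greedy pass over the sorted list guarantees every bin respects the ratio, though it does not necessarily minimize the number of bins.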
Because of the peculiar shape studied, the panelization was done using three types of mold:
- Cylindrical: singly curved panels, zero Gaussian curvature;
- Toroidal: doubly curved anticlastic panels, negative Gaussian curvature;
The following figures show the types of molds used and the number of panels each accounts for.
All drawings, data and analysis results are assembled and automatically added to reports and graphical visualizations. This is crucial to keep a clear view of the project phase and to share it with all the professionals working on it. Imagine if every variation and every optimization result had to be drawn by hand, or all result data entered manually into reports: it would consume time that engineers and architects could otherwise spend on more productive tasks.
This paper showed the difficulties encountered when designing a free-form architecture and the cutting-edge tools required to deal with them. It is clear that traditional tools, where intermediate steps are not linked to each other, cannot give the designer sufficient help. Moreover, without the full parametrization of the project it would be impossible to optimize it automatically by having a script screen thousands of different solutions. The procedure described here allowed costs to be greatly reduced, especially regarding glass panel production, and it was made possible by the growing role of computation in the architectural landscape.
- EIGENSATZ, M., DEUSS, M., SCHIFTNER, A., KILIAN, M., MITRA, N. J., POTTMANN, H., AND PAULY, M. 2010. Case studies in cost-optimized paneling of architectural freeform surfaces. In Advances in Architectural Geometry.
- EIGENSATZ, M., KILIAN, M., SCHIFTNER, A., MITRA, N., POTTMANN, H., AND PAULY, M. 2010. Paneling architectural freeform surfaces. ACM Trans. Graphics 29, 3 (Proc. SIGGRAPH).
Dodo is a plugin for Grasshopper where I'm collecting scientific tools useful for computational designers. You can find the plugin, complete with some examples, here, and the full manual here. If you have any questions, ideas to implement, or insults, do not hesitate to contact me!
An isosurface is the collection of points in space that map to the same value. In a 2D domain the isosurface becomes an isoline; an example can be found in topographical maps, where mountains and depressions are represented as closed curves, each one indicating a different height.
Isosurfaces are the 3D equivalent of isocurves: in a 3D domain where each point in space has a certain value, they connect all the points sharing the same value, forming one or multiple surfaces.
One of the main ways to produce these isosurfaces is the Marching Cubes algorithm, or one of the methods derived from it. This plugin uses Marching Tetrahedra, since they handle singular points better.
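The core of Marching Tetrahedra is small enough to sketch: each tetrahedron of the sampling grid is cut by the isosurface into zero, one or two triangles, with the crossing points found by linear interpolation along the edges. This is an illustrative Python sketch, not Dodo's .NET implementation; the function names are assumptions.

```python
def interp(p1, p2, v1, v2, iso):
    """Linearly interpolate the iso-crossing point along the edge p1-p2."""
    t = (iso - v1) / (v2 - v1)
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))

def polygonise_tetra(verts, vals, iso):
    """Return the triangle(s) where the isosurface crosses one tetrahedron.

    verts: four 3D points; vals: scalar field values at those points.
    A 3-1 split of the vertices yields one triangle, a 2-2 split yields
    two (a quad cut along a diagonal); no split yields nothing."""
    inside = [i for i in range(4) if vals[i] < iso]
    outside = [i for i in range(4) if vals[i] >= iso]
    if not inside or not outside:
        return []  # all four vertices on one side: no surface here
    # one crossing point per inside-outside edge
    pts = [interp(verts[i], verts[o], vals[i], vals[o], iso)
           for i in inside for o in outside]
    if len(pts) == 3:
        return [pts]
    # 2-2 split: pts are ordered (i0,o0), (i0,o1), (i1,o0), (i1,o1)
    return [[pts[0], pts[1], pts[3]], [pts[0], pts[3], pts[2]]]

# One vertex below the iso level: a single triangle through edge midpoints.
tris = polygonise_tetra([(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)],
                        [0.0, 1.0, 1.0, 1.0], 0.5)
```

A full implementation runs this over every tetrahedron of a decomposed grid and welds the shared vertices into a mesh.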
Some of the newer variants make use of derivatives of the given scalar field, but unfortunately the Grasshopper Field component does not calculate derivatives. Maybe in the future I will implement a customized field component, but for the time being I thought it best to use the tools already available.
This is an Isosurface produced by superposing the field generated by a cylinder and one generated by random points in space.
Resources: Marching Tetrahedra of Paul Bourke.
Non-linear optimization differs from linear optimization in that the function to minimize and/or the constraints are non-linear, which is the case for most engineering applications. The NLopt engines used here are gradient-free: they compute an approximate value of the gradient at a given point and then move to a neighbouring point according to each engine's interpretation of that gradient.
This plugin uses a .NET wrapper of the famous NLopt library: you can find the original library here and the .NET implementation here.
Due to the difficult interoperability of the DLL, some engines are missing and non-linear inequality constraints are harder to fulfill.
Artificial Neural Network
Artificial Neural Networks date back to the second half of the 20th century and take their name from their similarity to the neuron interconnections of the brain. An ANN is composed of a series of layers, each containing a number of neurons, each of which connects to its peers in the layers before and after, as shown in the following picture. The example in the picture shows an ANN mapping data from a 3D domain to a 2D one by means of a hidden layer. Hidden layers are those not directly connected to the input data or the output. An ANN can have any number of neurons and hidden layers, which can lead to completely different results as well as increased computational time. Neurons' connections are drawn as arrows pointing in the direction in which the data flows. Usually neurons perform very basic operations, and they are connected through weighted functions whose values the ANN adjusts in order to fit the input data to the expected results.
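The layer-by-layer data flow described above can be sketched as a forward pass. Dodo's networks come from a .NET library; this Python sketch with sigmoid units, random weights, and a hypothetical 3-4-2 layout is purely illustrative of the structure, not of that library's API.

```python
import math
import random

random.seed(0)

def feedforward(x, weights):
    """Propagate input x through fully connected layers of sigmoid units.
    weights: one (matrix, bias vector) pair per layer."""
    a = x
    for W, b in weights:
        a = [1 / (1 + math.exp(-(sum(w * ai for w, ai in zip(row, a)) + bi)))
             for row, bi in zip(W, b)]
    return a

# A 3-4-2 network: 3 inputs, one hidden layer of 4 neurons, 2 outputs.
# Random weights just to show the shapes involved; training would tune them.
layers = [(3, 4), (4, 2)]
weights = [([[random.uniform(-1, 1) for _ in range(n_in)]
             for _ in range(n_out)],
            [random.uniform(-1, 1) for _ in range(n_out)])
           for n_in, n_out in layers]
out = feedforward([0.5, -0.2, 0.8], weights)  # two values in (0, 1)
```

Each layer is just a weighted sum plus bias squashed by an activation; everything a training algorithm does is adjust those weights and biases.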
ANNs are applied to data fitting, prediction and classification; their implementation is treated below.
These are not the only applications, though, as ANNs can also be used with unsupervised learning, as in the Self-Organizing Map (SOM).
The library on which Dodo relies for Neural Networks can be found here.
ANN – Supervised Training
This type of learning algorithm uses sample inputs matched with desired output values during the learning phase. The goal is to shape the ANN so that it provides an output close to the expected one when given an input.
Delta Rule learning is one of the two threshold finders in Dodo's ANN, along with the Perceptron Rule.
In the example below, the 8 vertices of a cube are assigned to two groups: group 0 above and group 1 below. The trained ANN finds the decision boundary, depicted as a grey plane, which splits the decision space in two. In the picture on the right, an 11x11x11 grid is sampled and only the points with a predicted class of 1 are shown, demonstrating that they respect the boundary surface.
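A delta-rule version of the cube example above can be sketched in plain Python. This is an illustrative single-unit sketch, not Dodo's implementation; the 0.5 classification threshold, the learning rate, and the class-by-height labeling are assumptions of the sketch.

```python
def train_delta(samples, lr=0.1, epochs=500):
    """Delta-rule (LMS) training of a single linear unit.

    samples: list of (input_vector, target) pairs with target 0 or 1.
    Returns the weight vector; the last entry is the bias weight."""
    n = len(samples[0][0])
    w = [0.0] * (n + 1)
    for _ in range(epochs):
        for x, t in samples:
            xa = list(x) + [1.0]              # append bias input
            y = sum(wi * xi for wi, xi in zip(w, xa))
            for i in range(n + 1):
                w[i] += lr * (t - y) * xa[i]  # the delta rule update
    return w

def classify(w, x):
    """Threshold the linear output to get a class label."""
    xa = list(x) + [1.0]
    return 1 if sum(wi * xi for wi, xi in zip(w, xa)) >= 0.5 else 0

# The 8 cube vertices: class 1 on the bottom face (z = 0), class 0 on top.
cube = [((x, y, z), 1 - z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
w = train_delta(cube)
```

After training, the weights encode a plane separating the two faces of the cube, which is exactly the grey decision boundary in the figure.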
In this example the NN is fed a number of points in 2D space belonging to three distinct groups, which leads to a 3D solution space. To represent the group each point belongs to, a 3D vector can be used with either 0 or 1 in each dimension. Moreover, for a better graphical understanding, the vectors are multiplied by 255 to turn them into colors.
Supervised training can make an ANN able to predict the values of a series. In the example shown below, 8 pairs of numbers were used and plotted in teal. The red curve is the approximation generated by the ANN, trained using back-propagation learning. The two curves superpose neatly on the first part and diverge a bit at the end, but this behaviour can change widely when playing with the learning coefficients.
For prediction the procedure is the same as before, but the ANN is fed a single series of numbers, using sub-series of 5 values to predict the 6th; the difference between the prediction and the expected value rates the learning. In the example below, 16 values from a cosine were used, and the picture underneath shows how well the ANN predicts the following 34 values without a progressive increase of the error.
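The sliding-window preparation behind this example is simple to sketch: turn the single series into (window, next value) training pairs. Illustrative Python; the window width of 5 matches the text, while the cosine sampling step is an assumption.

```python
import math

def make_windows(series, width=5):
    """Split a series into (window, next value) training pairs, as in the
    sliding-window prediction described above: each window of `width`
    consecutive values is matched with the value that follows it."""
    return [(series[i:i + width], series[i + width])
            for i in range(len(series) - width)]

# 16 samples of a cosine, as in the example above (step chosen arbitrarily)
series = [math.cos(0.4 * i) for i in range(16)]
pairs = make_windows(series)  # 11 pairs of 5 inputs -> 1 target
```

Once trained on these pairs, the network predicts forward by feeding it its own last 5 outputs, window by window.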
ANN – Unsupervised Training
The ANN is given sample inputs without expected results and organizes itself to find patterns in the series. Dodo implements two algorithms in this category: SOM and Elastic Network.
Unsupervised training with an elastic network can be used to find a solution to the travelling salesman problem. The elastic network is initially placed at the center of the data set, then slowly tries to replicate the values (positions), basically working as a damped spring system. The resulting weights are the Euclidean distances between the data points, and the neurons' output indicates how well the NN fits the data samples. The neurons' output can then be used to see how similar another dataset is to the first one; it is my understanding that such pattern-recognition strategies have been successfully used to recognize cancer cells in pictures.
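One update of the spring-like behaviour described above can be sketched as follows: each node of an elastic ring is pulled toward nearby cities and toward its ring neighbours. This is a heavily simplified, illustrative Python sketch of the elastic-net idea for the TSP; the Gaussian weighting and all the constants are assumptions, not Dodo's parameters.

```python
import math

def elastic_step(ring, cities, k=0.2, alpha=0.8, beta=2.0):
    """One pass of a (much simplified) elastic-net update for the TSP.

    Each ring node feels an attraction toward every city, weighted by a
    Gaussian of the distance, plus a spring force toward its two ring
    neighbours, like a damped spring system."""
    new = []
    n = len(ring)
    for j, p in enumerate(ring):
        fx = fy = 0.0
        for cx, cy in cities:  # city attraction, fading with distance
            d2 = (cx - p[0]) ** 2 + (cy - p[1]) ** 2
            w = math.exp(-d2 / (2 * k * k))
            fx += w * (cx - p[0])
            fy += w * (cy - p[1])
        left, right = ring[j - 1], ring[(j + 1) % n]  # ring smoothing
        sx = left[0] + right[0] - 2 * p[0]
        sy = left[1] + right[1] - 2 * p[1]
        new.append((p[0] + alpha * fx / len(cities) + beta * k * sx,
                    p[1] + alpha * fy / len(cities) + beta * k * sy))
    return new
```

Iterating this step (while shrinking k) stretches the ring until it passes close to every city; reading the nodes off in order then gives a tour.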
Self Organizing Map