RAMSES

What this page is (not):

This page gives an overview of how to run simulations with the Copenhagen version of RAMSES; it does not explain the underlying code structure. It is partly based on the main user guide by Romain Teyssier. If you want to learn more about the concepts behind RAMSES, please refer to the articles by Teyssier (2002) or Fromang et al. (2006).

 

Useful links:

Link to the Copenhagen version of the code: https://bitbucket.org/thaugboelle/ramses/wiki/Maintain

Link to the standard version of the code: https://bitbucket.org/rteyssie/ramses/

Link to the standard user guide: http://irfu.cea.fr/Projets/COAST/ramses_ug.pdf

Link to the first paper in 2002 by Romain Teyssier about RAMSES: http://adsabs.harvard.edu/abs/2002A%26A...385..337T

Link to the article by Fromang et al. (2006) for the MHD version: http://adsabs.harvard.edu/abs/2006A%26A...457..371F

 

Let's get started!

Where do I obtain the code?

You find the code under this link: https://bitbucket.org/thaugboelle/ramses/wiki/Maintain.

After you have cloned the repository with

$ git clone git@bitbucket.org:thaugboelle/ramses.git 

you will then find a folder called "ramses" on your machine. If you go to that folder and do

$ ls

you find the following list of files: "amr/", "bin/", "doc/", "hydro/", "mhd/", "namelist/", "patch/", "pm/", "poisson/", "utils/"

There are many modifications in the code compared to the standard version, but most changes are done in "patch/mhd/troels".

 

How to compile the code?

Before running, you have to compile the code. Go to the directory "PATHTORAMSES/bin". There, you find a file called "options.mkf.org". In this file you can set several parameters, e.g. how many dimensions you consider, which solver you want to use (MHD/hydro), which compiler you want to use, whether you want to use OpenMP and/or MPI, whether you include KROME, etc. Some parameters are self-explanatory and most of them are described in the file. Copy the template file

$ cp options.mkf.org options.mkf

and then modify options.mkf to your needs. Now you're ready to compile the code simply by typing

$ make -j
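
For orientation, a sketch of the kind of settings options.mkf contains is shown below. The variable names here are illustrative assumptions, not the authoritative ones; the real names and allowed values are documented in the comments of options.mkf.org itself.

# Sketch only -- variable names are assumptions; see options.mkf.org for the real ones
NDIM    = 3                     # number of dimensions (1, 2, or 3)
SOLVER  = mhd                   # solver: mhd or hydro
PATCH   = ../patch/mhd/troels   # patch directory with the Copenhagen modifications
MPI     = 1                     # build with MPI
OPENMP  = 0                     # build without OpenMP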

 

How to run a simulation?

If the code compiled without any errors, you will find an executable called "ramses(1/2/3)d(mhd/hydro)", depending on the number of dimensions and the solver you use. Copy the executable to the directory from where you want to run the code, and create/insert your input file there (the default name is "input.nml"). You find a detailed description of the options for the input file in the "Tell me! What are the runtime parameters?" section. Once you have an appropriate input file, simply execute the executable together with your input file and save the standard output to a log file, i.e.:

$ ./ramses(1/2/3)d(mhd/hydro) input.nml > log

If you want to monitor the standard output on your screen while also storing it in a file, type

$ ./ramses(1/2/3)d(mhd/hydro) input.nml | tee log

If you want to run your code on a cluster, you probably want to execute these commands as part of a batch script. Please go to "Running batch jobs" for a detailed description and an example file of how to do this. For larger runs, it is generally a good idea to use the "jobcontrol.csh" script, which can be found in "$RAMSESPATH/utils/scripts/". The script allows you to add some *.flag files that make life easier for you during your run. How to make use of the *.flag files is explained in the section "How to monitor the runs?".  
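
As a rough illustration, such a batch script might look like the sketch below. This is only a sketch with assumed SLURM settings; the partition, module loads, process counts, and the integration with jobcontrol.csh are cluster-specific, so use the template from "Running batch jobs" for real runs.

#!/bin/bash
#SBATCH --job-name=ramses       # assumed SLURM directives; adapt to your cluster
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=20
#SBATCH --time=24:00:00

# run the MPI executable and keep the standard output in a log file
mpirun ./ramses3dmhd input.nml | tee log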

How to define the input file?

RAMSES offers a series of runtime parameters, which are organized in several blocks of options.

A namelist consists of several blocks and in principle looks like this:

&BLOCK_NAME1
option1=999
option2=t
/
&BLOCK_NAME2
option3=42
...
/

Below, you find an overview of the parameter options.


 

Tell me! What are the runtime parameters?

In the following, you find a list of the options for each block, in the style of the standard user guide. Many of them are described there in more detail; the list here focuses on the ones that are used frequently in our simulations.

 

&RUN_PARAMS

Variable name, syntax and default value   Fortran type   Description
poisson=.false. Logical Activate Poisson solver.
hydro=.false. Logical Activate hydrodynamics or MHD solver.
verbose=.false. Logical Activate verbose mode.
nrestart=0 Integer Output file number from which the code loads backup data and resumes the simulation. The default value, zero, is for a fresh start from the beginning. Setting nrestart=-1, RAMSES automatically chooses the last available output file. Restarting with the same or a higher number of processes is straightforward, while reducing the number of processes is not.
nstepmax=1000000 Integer Maximum number of coarse time steps.
ncontrol=1 Integer Frequency of control lines written to standard output (i.e. into the log file), in units of coarse time steps.
nremap=0 Integer Frequency of calls, in units of coarse time steps, of the load balancing routine. For MPI runs only; the default value, zero, means “never”.
nsubcycle=2,2,2,2,2, Integer array Number of fine level sub-cycling steps within one coarse level time step. Each value corresponds to a given level of refinement, starting from the coarse grid defined by levelmin, up to the finest level defined by levelmax. For example, nsubcycle(1)=1 means that levelmin and levelmin+1 are synchronized. To enforce single time stepping for the whole AMR hierarchy, you need to set nsubcycle=1,1,1,1,1,
overload=4 Integer Number of subdomains
swap_domains=t Logical Improve load balancing
pic=t Logical Activate Particle-In-Cell solver
tracer=t Logical Activate tracer particles
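
A minimal sketch of this block, combining only the options listed above (values are illustrative):

&RUN_PARAMS
hydro=t
poisson=t
pic=t
tracer=t
nrestart=-1
ncontrol=1
nremap=10
nsubcycle=1,1,2,2,2
/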

 

 &STARS

Variable name, syntax and default value   Fortran type   Description
do_sink=t Logical Activate sinks; setting this to 'f' ignores the rest of the block
do_dumps=f Logical Produces an extra dump when a new sink is born
sink_file='hms.dat' Character Data of the old high-mass stars from STAGGER (only important for the 40 pc simulation)
maxstars=n Long Maximum number of stars/sinks (rarely used)
rho_limit=16e-5 Real Density above which sinks form (=0 => automatic choice)
rho_limit_fraction=1. Real Scaling constant for the automatic choice
max_distance=8. Real Exclusion radius around the sink, in units of cells
rho_fraction=0.1 Real Minimum ratio between the density that triggers creation of a sink particle and the density of a nearby cell, which has to be exceeded by a cell considered for accretion
acc_rate=1e-3 Real Fraction of the cell mass accreted per unit time, where time is measured in units of the cell size divided by the local Kepler speed
acc_fraction=0.25 Real Maximum distance to the sink particle, as a fraction of max_distance, for a cell to be considered for accretion
twrite=1e10 Real Forces stars to be written to the stars.dat file with a certain delta t
verbose=2 Integer Different levels of verbose output
strict_checking=t Logical Check consistency
do_translate=t Logical Follow a sink
center_star=71 Integer Select the sink to zoom in on
do_halt=t Logical Halt the center star
do_sne=t Logical Allow supernova explosions by high-mass stars
do_refine=t Logical Refine to the highest level on cells with sinks
radius_SN=3 Integer Ancient parameter; used to be 8, but is not relevant any more
tabledir='$RAMSESPATH/patch/mhd/troels/supernovae' Character Select table for supernova explosions
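
A minimal sketch of this block built from the options above (values are illustrative; rho_limit=0 selects the automatic choice):

&STARS
do_sink=t
rho_limit=0
rho_limit_fraction=1.
max_distance=8.
rho_fraction=0.1
acc_rate=1e-3
acc_fraction=0.25
do_sne=t
tabledir='$RAMSESPATH/patch/mhd/troels/supernovae'
/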

 

&AMR_PARAMS

Variable name, syntax and default value   Fortran type   Description
a_verbose=1 Integer Different levels of verbose output
evenout=0 Integer  
levelmin=7 Integer Minimum level of refinement. This parameter sets the size of the coarse (or base) grid, n_x=2^levelmin.
levelmax=29 Integer Maximum level of refinement.
ngridmax=0 Integer Maximum number of grids (or octs) that can be allocated during the run within each MPI process.
ngridtot=0 Integer Maximum number of grids (or octs) that can be allocated during the run for all MPI processes. One has in this case ngridmax=ngridtot/ncpu.
npartmax=0 Integer Maximum number of tracer particles that can be allocated during the run within each MPI process.
nparttot=0 Integer Maximum number of tracer particles that can be allocated during the run for all MPI processes. In this case: npartmax=nparttot/ncpu.
nexpand=1 Integer Number of mesh expansions (mesh smoothing).
boxlen=1. Real Box size in user units
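
For illustration, a minimal sketch of this block (values are illustrative; ngridtot and nparttot are shared between all MPI processes):

&AMR_PARAMS
levelmin=7
levelmax=16
ngridtot=4000000
nparttot=32000000
nexpand=1
boxlen=1.
/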

 

&REFINE_PARAMS

Variable name, syntax and default value   Fortran type   Description
err_grad_u=-10.0 Real Relative velocity variations above which a cell is refined.
err_grad_d=2.0 Real Relative density variations above which a cell is refined.
err_grad_p=1.9 Real Relative pressure variations above which a cell is refined.
err_grad_b=3.0 Real Relative magnetic field variations above which a cell is refined.
levelmax_?=20 Integer Maximum level until which refinement by err_grad_? is allowed.
floor_?=1d1 Real Value below which refinement by err_grad_? is not allowed.
rho_sph=200. Real Density normalization of first m_refine
m_refine=1.024,0.512,... Real array Quasi-Lagrangian strategy: each level is refined if the mass in a cell exceeds m_refine(ilevel)*mass_sph.
r_refine=2,32e-2,... Real array Geometry-based strategy: radial size and shape of the refined region at each level.
a_refine=1.0,1.0,... Real array Geometry-based strategy: elliptical shape of the refined region at each level; major-axis.
b_refine=1.0,1.0,... Real array Geometry-based strategy: elliptical shape of the refined region at each level; minor-axis.
exp_refine=2.0,2.0 Real array Geometry-based strategy: exponent defining the norm used to compute distances for the refined region (exp_refine=2 corresponds to an ellipsoid).
x_refine=0.5,0.2,... Real array Geometry-based strategy: x-coordinate of the center of the refined region at each level of the AMR grid.
y_refine=0.5,0.2,... Real array Geometry-based strategy: y-coordinate of the center of the refined region at each level of the AMR grid.
z_refine=0.5,0.2,... Real array Geometry-based strategy: z-coordinate of the center of the refined region at each level of the AMR grid.
jeans_refine=-1.,-1. Real array Jeans refinement strategy: each level is refined if the cell size exceeds the local Jeans length divided by jeans_refine(ilevel).
interpol_var=0 Integer Variables used to perform interpolation (prolongation) and averaging (restriction). interpol_var=0: conservatives (ρ, ρu, ρE); interpol_var=1: primitives (ρ, ρu, ε).
interpol_type=1 Integer Type of slope limiter used in the interpolation scheme for newly refined cells or for buffer cells:
interpol_type=0: No interpolation,
interpol_type=1: MinMod limiter,
interpol_type=2: MonCen limiter,
interpol_type=3: Central slope (no limiter).
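
Putting some of the options above together, a minimal sketch of this block (values are illustrative; levelmax_d applies the err_grad_d criterion up to level 20):

&REFINE_PARAMS
err_grad_d=2.0
levelmax_d=20
rho_sph=200.
m_refine=1.024,0.512,0.256
jeans_refine=4.,4.,4.
interpol_type=1
/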

 

&HYDRO_PARAMS

Variable name, syntax and default value   Fortran type   Description
h_verbose=1 Integer To get the hydro_flags in the output file
gamma=1.666667 Real Adiabatic index
cbmax=100. Real  
courant_factor=0.7 Real Courant factor for time step control
slope_type=3.5 Real Type of slope limiter used in the Godunov scheme for the piecewise linear reconstruction.
scheme='muscl' Character Name of the numerical scheme ('muscl' or 'plmde').
riemann='hlld' Character Name of the desired Riemann solver. Possible choices are ’exact’, ’acoustic’, ’llf’, ’hll’ or ’hllc’ for the hydro solver and ’llf’, ’hll’, ’roe’, ’hlld’, ’upwind’ and ’hydro’ for the MHD solver.
riemann2d='hlld' Character Name of the desired 2D Riemann solver for the induction equation (MHD only). Possible choices are ’upwind’, ’llf’, ’roe’, ’hll’, and ’hlld’.
pressure_fix=f Logical Activate hybrid scheme (conservative or primitive) for high-Mach flows. Useful to prevent negative temperatures.
do_isothermal=f Logical Use an isothermal equation of state
temp_iso=1000. Real Isothermal temperature; T = p_ini/rho_ini * 1/(gamma-1)
t_metaldecay(?)=0.75e6 Real Half-life of the corresponding metal
metal_table(?)='al_LC06.tbl' Character Table of the corresponding metal (short-lived radionuclide)
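
A minimal sketch of this block for an MHD run (values are illustrative):

&HYDRO_PARAMS
gamma=1.666667
courant_factor=0.7
slope_type=3.5
scheme='muscl'
riemann='hlld'
riemann2d='hlld'
pressure_fix=t
/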

 

&FORCE

Variable name, syntax and default value   Fortran type   Description
do_force=f Logical Force drive the gas
iseed=-77 Integer  Seed for pseudo-random numbers
k1=1.  Real  Smallest drive wavenumber (=1 for whole box)
k2=2.  Real  Largest drive wavenumber (=2 for half box)
pk=0.  Real  Wave number power exponent; power ~ k^pk
ampl_turb=20.  Real  Turbulent amplitude (~ equal final Mach number)
t_turn=0.025  Real  Turn-over time (dynamic time): ~ 0.5/Mach
t_turb=-0.0125  Real  Start-up time => initial amplitude (=0 for no initial velocity)
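
A minimal sketch of a driven-turbulence setup, using the values listed above:

&FORCE
do_force=t
iseed=-77
k1=1.
k2=2.
pk=0.
ampl_turb=20.
t_turn=0.025
t_turb=-0.0125
/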

 

&PHYSICS_PARAMS

Variable name, syntax and default value   Fortran type   Description
Grho=0. Real Gravity coupling constant; a nonzero value activates gravity
Grho_time=0.1 Real Ramp-up time for Grho (fade in gravity); should be several dynamical times t_dyn
units_density=1e-24 Real Density units
units_length=123.428e18 Real Length unit
units_time=6.67e15 Real Time unit
units_velocity=0.97781e5 Real Velocity unit
cooling=t Logical Activate the cooling and/or heating source term in the energy equation.
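
A minimal sketch of this block, using the unit values from the table above (Grho=1. is an illustrative nonzero value to switch on gravity):

&PHYSICS_PARAMS
Grho=1.
Grho_time=0.1
units_density=1e-24
units_length=123.428e18
units_time=6.67e15
units_velocity=0.97781e5
cooling=t
/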

 

&COOL

Variable name, syntax and default value   Fortran type   Description
type=3 Integer Choose the cooling table
dt_type=2 Integer  Always use 2 = time sub-cycling
verbose=1 Integer  Verbosity
n_crit=500. Real  Critical number density above which cooling is tapered off
G0=0.0 Real  Cooling constant for the part proportional to N
G1=1e-25 Real  Cooling constant for the part proportional to N^2
G2=1e-28 Real  Radiative heating [FIXME: the order of these could be different]
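
A minimal sketch of this block, using the values listed above:

&COOL
type=3
dt_type=2
verbose=1
n_crit=500.
G0=0.0
G1=1e-25
G2=1e-28
/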

 

&INIT_PARAMS

Variable name, syntax and default value   Fortran type   Description
filetype='tracer' Character Select to initialize tracer particles. NOTE! Comment this out if you restart from a snapshot with tracer particles already defined.
initfile='$RAMSESPATH/...' Character  
nregion=1 Integer Number of independent regions in the computational box used to set up initial flow variables. NOTE! Do not use it when using initfile.
?_region=100. Real Flow variables in each region (density, velocities and pressure). For point regions, these variables are used to define extensive quantities (mass, velocity and specific pressure).
exp_region=2 Real array Exponent defining the norm used to compute distances for the generalized ellipsoid. exp_region=2 corresponds to a spheroid, exp_region=1 to a diamond shape, exp_region>=10 to a perfect square.

 

&OUTPUT_PARAMS

Variable name, syntax and default value   Fortran type   Description
delta_tout=3e-6 Real Time increment between outputs.
tend=1 Real Final time of the simulation
noutput=1 Integer Number of specified output times. If tend is not used, at least one output time should be given, corresponding to the end of the simulation.
tout=0.0,0.5,... Real array Values of the specified output times.
foutput=100000 Integer Frequency of additional outputs in units of coarse time steps. foutput=1 means one output at each time step. Specified outputs (see above) will not be superseded by this parameter.
fscratch=100 Integer Like foutput, but the output number isn't incremented for these dumps, so the last dump is overwritten until the next regular output time is reached.
trace_level=0 Integer Trace MPI in virtual_boundaries.f90
datadir='i' Character Directory where output files are accessed and saved
nmpi_dump=8 Integer RAMSES immediately makes an output as though it were run with nmpi_dump processes, then stops. 
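
A minimal sketch of this block (values are illustrative; datadir='data' is just an example directory name):

&OUTPUT_PARAMS
delta_tout=3e-6
tend=1
foutput=100000
fscratch=100
datadir='data'
/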

 

&TRACER_PARAMS

Variable name, syntax and default value   Fortran type   Description
nregions=1 Integer Number of independent regions in the computational box in which tracer particles are placed.
x_center=0.5 Real x-coordinate of the center of the region where tracer particles are placed.
y_center=0.5 Real y-coordinate of the center of the region where tracer particles are placed.
z_center=0.5 Real z-coordinate of the center of the region where tracer particles are placed.
r=0.01 Real Distance from the center to the boundary of the tracer region.
mass_pp=1e-9 Real Mass per particle

 

&SN1A_PARAMS

Variable name, syntax and default value   Fortran type   Description
v_SN1A_KMS=20. Real  Velocity dispersion of type Ia SNe
H_SN1A_PC=250. Real  Vertical scale height of type Ia SNe
F_SN1A_PERKPC2MYR=20. Real  Frequency per kpc^2 and Myr
N_SN1A=0 Integer  Number of type Ia SNe
t_init_Myr=2. Real  Period of time with artificial SNe

 

Recently, Troels H implemented tracer particles in RAMSES. Tracer particles are especially useful for following passive scalars such as short-lived radionuclides (SLRs), since they reveal information about the history of the SLRs. However, tracer particles can also be used to obtain information about the evolution of other physical quantities, and they are certainly a powerful tool for following the dynamics in the early protoplanetary disk. 

Before you start, please make sure that you have compiled the latest version of the code. In principle this should now happen automatically, as explained in the compiling section, but it does no harm to remind you again. If you are working on one of the clusters, please type:

source /software/astro/intel/parallel_studio_xe_2015_upd1/composer_xe_2015.1.133/bin/compilervars.sh intel64
source /software/astro/intel/parallel_studio_xe_2015_upd1/impi/5.0.2.044/bin64/mpivars.sh

 

1. To INITIALIZE tracer particles, the FILETYPE should be set to ‘tracer’ in &INIT_PARAMS (and you also need to set the RUN_PARAMS and AMR_PARAMS in item 2 below):

&INIT_PARAMS

…

filetype='tracer'

…

/

 

This puts one tracer particle per leaf cell, keeping with it only its ID number, position, velocity, and initial mass.

However, when DUMPING, the tracer particle data is “dressed” with all kinds of useful info about the hydro variables at the points where the particles are located at that time.

 

2. To RUN with tracer particles, the PIC and TRACER logicals should be on in RUN_PARAMS, and the NPARTMAX parameter must be set in AMR_PARAMS:

&RUN_PARAMS

…

pic=t  tracer=t

…

/

&AMR_PARAMS

…

npartmax=5000000        ! set this similar to 8*ngridmax (?)

…

/

 

3. To SELECT regions where tracer particles are started

To avoid having a large number of useless tracer particles to read, one can limit the regions where they are started. The example below selects the positions of cores 70 and 74 at dump 22:

&TRACER_PARAMS

nregions=2 r=2*16e-3                                  ! 128 kAU sides

x_center=0.81935633,0.33692739  

y_center=0.78175744,0.68703173

z_center=0.20594852,0.63185244                   ! cores 70+74 at no=22

mass_pp=1e-8                            ! 1e-8 mass units per particle

/

One may need to experiment with mass_pp to get a suitable number of particles; a lot are needed to make sure there are some close to the sinks.


How to monitor the runs?

All text output:

While the code is running, you are probably also tasked with monitoring your run. Depending on your specific setup, a number of files can be created, overwritten or appended to, and you may find this overwhelming at first. However, a main, plain-text, per-run output file will always be produced, and it will keep growing as your run progresses, since new output from the code is appended to it for as long as the code runs. So a solid place to start is by viewing this file, by doing

 

$ tail -f log

and staring at it for a few seconds as your simulation starts. The '-f' switch instructs the tail command not to stop when 'end of file' is reached, but rather to wait for additional data to be appended to the input. Notice that the first lines passing by contain some information about the job starting, and you may also recognize an echo of the runtime parameters that you placed in the input.nml file. After a few seconds, the output will change and look more like this:

 

 

00: accretion: level, nacc, phi_min, rho_max, rho_limit = 22 2559 0 0 -4.211E+01 3.292E+09 1.407E+10
00: Fine step= 1916182 t= 4.332000E-01 dt= 4.745E-23 Cv= 2.16E+15 Cb= 3.12E+15 Cg= 8.00E-01 Cr= 3.68E-02 Cp= 6.19E+01 Pos= 0.2818298 0.2724915 0.9918518 a= 1.000E+00 mem=55.5% 59.6%
00: accretion: level, nacc, phi_min, rho_max, rho_limit = 22 2443 0 0 -4.209E+01 3.233E+09 1.407E+10
00: accretion: level, nacc, phi_min, rho_max, rho_limit = 21 102658 0 49 -4.462E+01 2.718E+10 1.407E+10
20: Center star: level, mass, pot, rho, r, v : 21 21 3.911E-08 -4.462E+01 2.271E+10 0.278764096 0.272396943 0.996069433 -5.625E+14-5.497E+14-2.010E+15

 

Since the tail command ignores end of file, you will have to either leave it running in a different screen and interrupt it later, or hit ^c (<Control>+<c>) to interrupt it.

 

 

Targeted queries:

 

All this seems trivial, but it only holds at the start: the file quickly becomes very large and tedious to read. Therefore it is useful to know how to grep for the essential information you're interested in. grep's basic function is string matching in text files, and since the log file is a text file, this makes it a powerful tool:

 

 

Command Effect
grep dump log Check output dumps
grep Restart log Check which dump the run was restarted from
grep dump_all log Check for main dumps 

 

If you do not know exactly what you are looking for, or want to look for many things at once, remember that commands of the grep family understand regular expressions:  

 

egrep 'Main|Load|dump' log Check run progress
egrep 'mus/p|dump' log Check run speed
egrep 'mus/|cost' log Check run performance

 

 

Use those in combination with shell commands and the shell itself, and suddenly grep becomes very powerful:

 

 

ls -l `grep -L dump_all log_* | sed 's/log/*/'` Lists all files (in_*, ramses*, log_*) which do NOT include a dump
\rm `grep -L dump_all log_* | sed 's/log/*/'` Removes all files (in_*, ramses*, log_*) which do NOT include a dump
grep -v '!d' in* | grep delta_tout See the time steps used, ignoring comments (there is no sane way to escape '!' in bash)
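
If you want a continuously updating view instead of re-running such commands by hand, the standard watch utility can wrap any of them, e.g.

$ watch -n 10 "egrep 'Main|Load|dump' log | tail -5"

which re-runs the egrep every 10 seconds and shows the last five matching lines.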

 

 

Once you understand the above syntax and decide that you want to get deeper 'in the zone', you may want to skim through the manual pages of sed and gawk.

 

 

Interfere with job parameters normally requiring restart (without losing your job slot in the queue):

 

You can touch a couple of *.flag files in your working directory (assuming you are executing with jobcontrol.csh), which can be extremely useful. Here is a list of some useful commands:

 

 

Command Effect
touch scratch.flag Forces a dump after the next main step; this dump is overwritten repeatedly until the next regular output time is reached
touch restart.flag Forces the code to restart a run two minutes after it crashed. In particular, it prevents you from losing your slot on a cluster after a crash. Extremely useful!
touch stop.flag Forces the code to stop after the next main step
touch load_balance.flag Triggers the code to load balance
echo 10 > nremap.flag Set nremap=10
echo 20 > foutput.flag Set foutput=20
echo ../run42 > cd.flag; touch stop.flag Switch run to another directory. NOTE! ../run42 must contain slurm.csh/intel.csh, input.nml, data/, etc
echo 'command' > cmd.flag Pass a single command
echo 'source' > source.flag Source a file

 

Load balancing and efficiency: