ECCE: Where is everything?
Here are some notes on where to find things that a newcomer might not know to look for when first starting out.
Where is the best top-level place to go for ECCE information?
The best place to start for pretty much everything ECCE is: https://www.ecce-eic.org/
Where are the tutorials?
The tutorials for ECCE can be found here: https://ecce-eic.github.io/tutorials_landing_page.html
(Note that these are pretty minimal and look to be basically a copy of the tutorials here.)
It may also be useful to look at the Fun4All documentation, in particular the sPHENIX documentation, since the ECCE code base is a clone of the sPHENIX code. Some information on the sPHENIX Fun4All implementation can be found here.
Where is the source code?
Good question! End users should probably start by looking at the macros repository: https://github.com/ECCE-EIC/macros.
This has the higher-level configuration options.
The core software itself is in: https://github.com/ECCE-EIC/coresoftware.
Where is the node I should log into at BNL?
To connect from outside of BNL you can do this:
ssh <username>@ssh.sdcc.bnl.gov
and once you are logged in there:
ssh eic0101
Note that you can go to any of eic0101 through eic0111.
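If you do this often, you can collapse the two hops into one. Here is a sketch of an ssh config entry using OpenSSH's ProxyJump option (assuming a reasonably recent OpenSSH, 7.3 or later; replace <username> with your SDCC account):
# In ~/.ssh/config on your local machine (sketch)
Host eic0101
    HostName eic0101
    User <username>
    ProxyJump <username>@ssh.sdcc.bnl.gov
With that in place, a plain "ssh eic0101" from your laptop will tunnel through ssh.sdcc.bnl.gov automatically.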
I would strongly recommend using the NX server though. This will give you a full Linux desktop environment which will preserve your windows across logins. Documentation for this can be found here: https://www.sdcc.bnl.gov/information/services/how-use-nx-sdcc. IMPORTANT: When presented with the option for connecting to "BNL, SDCC, nxcampus ..." or "Choose a node", choose the latter. The former will not allow ssh access to the eic0101 and friends hosts.
Where is the node I should log into at JLab?
To connect from outside of JLab you can do this:
ssh <username>@login.jlab.org
and once you are logged in there:
ssh ifarm
Note that you can go to any of ifarm1801, ifarm1802, or ifarm1901. (The ifarm alias will take you to one of these.)
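As at BNL, a recent OpenSSH can do both hops in one command with the -J (ProxyJump) option; a sketch, assuming the ifarm alias resolves from the login node as it does interactively (replace <username> with your JLab account):
ssh -J <username>@login.jlab.org <username>@ifarm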
Where is the singularity container?
The basic Singularity container we use for running jobs on systems other than the standard SDCC machines can be accessed via cvmfs and run like this:
singularity shell -B /cvmfs:/cvmfs /cvmfs/eic.opensciencegrid.org/singularity/rhic_sl7_ext.simg
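If you only want to run a single command rather than an interactive shell, singularity exec works the same way. A sketch (the ls target here is just an illustration of running something inside the container):
singularity exec -B /cvmfs:/cvmfs /cvmfs/eic.opensciencegrid.org/singularity/rhic_sl7_ext.simg ls /cvmfs/eic.opensciencegrid.org/ecce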
Where are the generated events files?
Right now, the ones I know about are on the BNL disk system in the directory tree:
/gpfs/mnt/gpfs02/eic/DATA
e.g. files
-rw-r--r-- 1 seidl eic 5308177512 Aug 16 2020 /gpfs/mnt/gpfs02/eic/DATA/YR_SIDIS/ep_18x100/ep_noradcor.18x100_run001.root
-rw-r--r-- 1 seidl eic 5308309908 Aug 16 2020 /gpfs/mnt/gpfs02/eic/DATA/YR_SIDIS/ep_18x100/ep_noradcor.18x100_run002.root
-rw-r--r-- 1 seidl eic 5312634667 Aug 16 2020 /gpfs/mnt/gpfs02/eic/DATA/YR_SIDIS/ep_18x100/ep_noradcor.18x100_run003.root
...
Some of these have been copied over to JLab and are accessible from (almost) anywhere via xrootd:
> ls -l root://sci-xrootd.jlab.org//osgpool/eic/DATA/YR_SIDIS/ep_18x100
total 117802604
-r--------+ 1 davidl da 361526619 Jun  5 23:33 ep_noradcor.18x100highq_run001.root
-r--------+ 1 davidl da 361494028 Jun  5 23:33 ep_noradcor.18x100highq_run002.root
-r--------+ 1 davidl da 361574095 Jun  5 23:33 ep_noradcor.18x100highq_run003.root
-r--------+ 1 davidl da 362025081 Jun  5 23:32 ep_noradcor.18x100highq_run004.root
-r--------+ 1 davidl da 361516640 Jun  5 23:33 ep_noradcor.18x100highq_run005.root
...
If you are on a JLab farm computer, you should access them via the /work/osgpool directory.
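If you just want a local copy of one of these files, the xrdcp tool from the xrootd client package works from pretty much anywhere; a sketch using one of the files listed above (any other file under that tree works the same way):
xrdcp root://sci-xrootd.jlab.org//osgpool/eic/DATA/YR_SIDIS/ep_18x100/ep_noradcor.18x100highq_run001.root .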
Where are the simulation campaign job scripts?
You can find the production scripts here: https://github.com/ECCE-EIC/productions
Note that these contain some top-level scripts that will call the site-specific script automatically.
Where can I access the S3 storage space at BNL?
Anyone can access the files as read-only using:
username: eicS3read
accesskey: eicS3read
At this point in time write access is restricted to only a few people. Please check with the Simulations WG if you feel you need write access. Here are the general instructions:
1. You'll need to install an S3-compatible client. The recommendation is to use MinIO. (n.b. if you are doing this from a Mac, I recommend installing via Homebrew.) Note that if you are on a Linux computer and have cvmfs mounted, you can just use the binary installed there:
alias mcs3=/cvmfs/eic.opensciencegrid.org/ecce/gcc-8.3/opt/fun4all/utils/bin/mcs3
n.b. You'll notice the name is actually "mcs3" instead of the default "mc". This is to disambiguate from the other "mc".
2. Configure the client to point to the right host:
$ mcs3 config host add S3 https://dtn01.sdcc.bnl.gov:9000/ eicS3read eicS3read # <username> <accesskey>
3. Test it with something like:
$ mcs3 ls S3/eictest
Run mcs3 --help for help on how to use the client to copy files in and out of the S3 storage (a short copy example is shown after the note below).
NOTE: Please be aware that the mc program mentioned above comes from MinIO and is NOT the Midnight Commander program commonly installed on Linux systems. Even though both of these deal with filesystems, they are very different things!
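As an example of pulling a file out of S3 to your local disk with the client configured above (the path is the same DST file used in the ROOT example further down; substitute whatever file you actually want):
$ mcs3 cp S3/eictest/ECCE/MC/ana.14/5f210c7/SIDIS/pythia6/ep_18x100highq2/DST_SIDIS_pythia6_ep_18x100highq2_039_0004000_01000.root .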
Where is the magic to open a file on S3 directly from root?
Magic? What magic? There's nothing magic about it. CERN just has an infinite well of manpower to apply to root so they can support every obscure protocol or format invented. (No, I'm not bitter, but I am being sarcastic).
You can access .root files on S3 storage directly from your root session using the TS3WebFile class. For this to work, you should set the access key and secret key (S3_ACCESS_KEY and S3_SECRET_KEY) in your environment first. For example:
> setenv S3_ACCESS_KEY eicS3read
> setenv S3_SECRET_KEY eicS3read
> root -l
root [0] auto f = new TS3WebFile("s3://dtn01.sdcc.bnl.gov:9000/eictest/ECCE/MC/ana.14/5f210c7/SIDIS/pythia6/ep_18x100highq2/DST_SIDIS_pythia6_ep_18x100highq2_039_0004000_01000.root");
You can then use it as though the file were local.
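The setenv lines above are (t)csh syntax; if your shell is bash, set the same variables with:
export S3_ACCESS_KEY=eicS3read
export S3_SECRET_KEY=eicS3read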
Where is the BNL S3 storage Usage page?
Look here: https://monitoring.sdcc.bnl.gov/pub/grafana/d/P3NP5QZnk/eic-s3?orgId=1&from=now-70d&to=now
Where can I access the xrootd server at JLab?
If you want to read from the server from outside of JLab, make sure you have xrootd installed and do this:
export LD_PRELOAD=/usr/lib64/libXrdPosixPreload.so
ls root://sci-xrootd.jlab.org//osgpool/eic
ls root://dtn-eic.jlab.org//work/eic2
ls root://dtn-eic.jlab.org//work/eic3
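If you would rather not set LD_PRELOAD, the xrdfs tool from the same xrootd client package can list those areas directly (a sketch):
xrdfs root://sci-xrootd.jlab.org ls /osgpool/eic
xrdfs root://dtn-eic.jlab.org ls /work/eic2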
Or even easier, just open the file from inside root:
root[0] auto f = new TNetXNGFile("root://sci-xrootd.jlab.org//osgpool/eic/DATA/YR_SIDIS/ep_18x100/ep_noradcor.18x100_run002.root")
At one point, if you were on one of the JLab farm machines you would need to use a slightly different host (this has since been fixed, but is kept here for archival purposes):
ls root://sci-xrootd-ib.qcd.jlab.org//osgpool/eic
If you need to write to the xrootd server, you can only do this from a computer at JLab. The directory tree starts at:
/work/osgpool/eic
/work/eic2
/work/eic3
Note that this directory is rsync'd to the actual server node (dtn1902) periodically, no more than once every 4 hours, with 4pm EST being one of the sync times. If you absolutely must publish something immediately, you can log in to dtn1902 and put it in /osgpool/eic, but know that it will be removed during the next rsync. In these cases, you should put it there *and* on the work disk.
Where is the Computing Plan document?
Right now, the computing plan is a skeleton set up as an Overleaf document that is linked to a GitHub repository. Peter Steinberg set these up and gave David and Cristiano editing rights on Overleaf. As such, the Overleaf document has restricted access at the moment. The GitHub repository though can be found here:
https://github.com/ecce-notes/ecce-note-comp-2021-01
The URL to the Overleaf document is here, but if you do not have access you will just get a message that it is a restricted page.
https://www.overleaf.com/project/60c8ec8726f8b14fa6a861ee