Note
This document describes the procedures for observing with the auxiliary telescope in the early stages of system integration and commissioning. The procedures are likely to change very rapidly in these stages, so it is recommended that users keep a close eye on this document before doing any observations. We also document here some troubleshooting for commonly encountered issues. In case of questions, contact the document authors.
1 Introduction
It is important to emphasize that this document focuses on early operations of the Auxiliary Telescope, with a significant part of the hardware and software still in need of considerable development. For instance, we are still in the process of obtaining and validating the telescope pointing model and optical alignment.
There are also significant distinctions between the way we operate the telescope at this early stage and the way we plan to operate during commissioning and, even more so, during normal operations. At this point we maintain a “low-level” style of operations, both due to the need to be in full control of all the components and simply because of the lack of high-level operations software. In fact, we are in the process of developing high-level software that will considerably improve the user experience as well as pave the way for the development of “SAL Scripts” that will ultimately power the observatory Script Queue.
Furthermore, some issues pointed out in this document may have been corrected in the meantime, so it is very likely that the document will contain some outdated information. We will make an effort to document corrections as they occur, but be aware that changes may happen faster than we are able to update this document. Users may want to check for updated versions on the edition navigation page or check the GitHub repository for any pending pull requests.
2 Network architecture and connectivity
From the user's perspective, the summit network can be broken down into two main networks: the campus network and the control network. Whether you are at the summit, the base, or in Tucson, if you are connected to the LSST network (e.g. LSST-WAP) you will have access to the summit network. The control network, on the other hand, is only accessible from bastion computers on the summit. These bastions are connected to both the campus network and the control network, thus giving users access to the control network through ssh tunneling.
Attention
An important aspect of the control network is that it does not have access to the internet. This creates some issues, for instance, when updating software on the fly on computers connected solely to the control network.
A list of the host computers' IP addresses can be found here.
2.1 Useful ssh Tunneling rules
Paste the following rules into ~/.ssh/config on the computer you plan to use for the observations.
This rule will enable access to a Jupyter notebook server. Currently each user is given a Jupyter server running in a separate docker container. Each container has its own IP address on the control network and each user receives their own token to access the server. In the future we will use the DM LSP system. Instructions will be updated accordingly.
Host chile-jupyter
Hostname 139.229.162.118
User <username>
LocalForward 8885 192.168.1.2??:8885
This rule will enable connection to the GenericCamera live view server.
Host chile-liveview
Hostname 139.229.162.118
User <username>
LocalForward 8881 192.168.1.218:8888
This rule enables wget on the GenericCamera to download FITS images.
Warning
This will be deprecated once proper LFA handling is implemented.
Host chile-wget
Hostname 139.229.162.118
User <username>
LocalForward 8001 192.168.1.216:8000
This rule connects to the machine that hosts the liveview server (see 6.2 Live view server is not responding).
Host liveview-host
Hostname 139.229.162.114
User <username>
Once the rules are appended to ~/.ssh/config it should be possible to simply enter ssh <Host>. The user will log in to the bastion machine specified in the Hostname entry of the rule, the tunnel specified by LocalForward will be set up, and the service will be available at localhost:<local-port> on the user's machine. The format of the LocalForward parameter is <local-port> <remote-host>:<remote-port>. Feel free to change <local-port> to any suitable range on the machine used for the observations.
It is also possible to send the ssh command to the background while tunneling by adding the options -N -f.
To log in to the notebook server:
ssh -N -f chile-jupyter
and open the address localhost:8885
on a browser.
To open the liveview:
ssh -N -f chile-liveview
and open the address localhost:8881
on a browser.
To download FITS images taken with the GenericCamera:
ssh -N -f chile-wget
wget http://localhost:8001/<image_name>
3 Monitoring and Interactive tools
Here is a list of currently available tools to monitor and interact with the system, with a quick overview of how to use them. More details on how to perform specific tasks with the telescope are given in the following sections.
3.1 Engineering and Facility Database (EFD)
The EFD is responsible for listening to and storing all data (Telemetry, Events, Commands and Acknowledgements) sent by components and by users interacting with the components. The most recent incarnation of the EFD uses InfluxDB, a time-series database, to store the data. See sqr-034 for details about the EFD implementation.
Data from the summit is available on chronograf and can be accessed at http://summit-chronograf-efd.lsst.codes/.
On the left-hand side of the web page there is a tab with links to the different actions one can perform with Chronograf. Probably the two most useful tabs are Dashboards and Explore; the first takes you to a list of available dashboards created by users that gather important information about the subsystems. The “Summary state monitor” is a good example of the kind of general information one would be interested in during an observing night.
Important
The chronograf dashboards are shared between all users. If you feel like you need to make any change to the already existing dashboards, make sure to create a copy of the one you plan on editing and change that one instead.
The Explore tab lets users perform free-hand queries to the database using an SQL-like language.
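For example, a sketch of a query (assuming the lsst.sal.<CSC>.logevent_summaryState topic naming and the efd database name used by the summit EFD) that shows the ATMCS summary states over the last hour:
SELECT "summaryState" FROM "efd"."autogen"."lsst.sal.ATMCS.logevent_summaryState" WHERE time > now() - 1h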
3.2 Jupyter Lab Servers
Jupyter notebooks are very popular amongst astronomers, especially in the LSST collaboration. They provide an easy and simple way of running Python code interactively through a web browser and give the additional benefit of combining documentation (using markdown) and code. Users can run a Jupyter notebook server locally on their own machine or on remote servers, enabling a cloud-like environment with access to powerful computing or, in the case of the LSST control system, to specialized functionality.
The most recent incarnation of Jupyter notebooks is Jupyter Lab. It provides access to an environment similar to that of a notebook but with additional functionality.
We envision that Jupyter Lab servers will be a fundamental part of LSST control system, enabling users to perform low and high level operations in a well-known and interactive environment.
The current system deployment uses an individual Jupyter Lab server for each user, running in individual docker containers. In the near future, the plan is to start using the DM LSP ([LDM-542]) environment to manage user servers and environments.
The first step to access a Jupyter Lab server is to set up the ssh tunnel using the IP and token information provided for each user by the Telescope & Site team point of contact. Once the ssh tunnel is up it should be possible to open a web page at localhost:<local-port>. After entering the provided token, you should see the Jupyter Lab interface.
On the left-hand side there is a file browser which, by default, shows two directories: develop and repos. The develop directory is a bind mount on the server that runs the Jupyter Lab containers. Inside there is a repository of notebooks (develop/ts_notebooks) with examples and work notebooks from other users (separated by username). Feel free to browse and edit any notebook within this repo. Be sure to commit and push any work you may have done, and eventually make a Pull Request to the original repo so other users can see and use the work that was done.
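A minimal shell sketch of that workflow (the branch and notebook names here are placeholders only):
cd ~/develop/ts_notebooks
git checkout -b <username>-observing-notes
git add <username>/my_notebook.ipynb
git commit -m "Add observing notebook"
git push -u origin <username>-observing-notes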
The repos directory, on the other hand, contains some basic repos that ship with the container, holding the T&S software used to power the control system. Any data in this directory, or in the home folder, will be lost if the container is restarted. It is advisable to keep important data only inside the user-designated folder (e.g. develop).
3.3 LSST Operations and Visualization Environment (LOVE)
Note
TBD
3.4 Script Queue
Note
TBD
4 Auxiliary Telescope Commandable SAL Components (CSCs)
This diagram shows all the CSCs (light blue boxes) that are currently being used at the summit, their connections, the users' Jupyter servers, and the salkafka producer that is responsible for capturing all SAL traffic, serializing it in Avro, and sending it to Kafka to be inserted into the InfluxDB database (see sqr-034 for more information about the EFD).
5 Basic Operations Procedures
This section explains how one can perform the basic operations with the telescope using the Jupyter Lab server. Here we assume you were able to log in to the server assigned to you and either open an existing notebook or create an empty one to work with.
Important
You will notice that most of the tasks shown here can be performed in two ways, using the high-level software or the low-level software. At the time of this writing the low-level controls, where the user sends commands to individual CSCs and gets little feedback, are the only ones tested on sky. The high-level operations, as one can see, provide a much easier way to execute these operations, but have not been sanctioned yet.
Note
Notebooks with the procedures can be found in the develop/ts_notebooks/examples folder.
5.1 Startup procedure
At the end of the day, before observations start, most CSCs will be unconfigured and in the STANDBY state. The first step in starting up the system is to enable all CSCs. Putting a CSC in the ENABLED state requires the transition from STANDBY to DISABLED and then from DISABLED to ENABLED. When transitioning from STANDBY to DISABLED it is possible to provide a settingsToApply value that selects a configuration for the CSC. Some CSCs won't need any settings while others will. It is possible to check which settings are available by looking at the settingVersions event.
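For example, a minimal sketch (using the atdome remote defined in the notebook example below) that prints the recommended settings labels published by the ATDome settingVersions event:
# Most recently received settingVersions event, if any.
settings = atdome.evt_settingVersions.get()
if settings is not None:
    print(settings.recommendedSettingsLabels)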
After all CSCs are in the ENABLED state, we proceed to open the dome slit, set up the ATPneumatics, and start up the ATAOS. After the dome has finished opening, the telescope covers are opened and the procedure is complete.
Note
In some cases, if the dome controller is restarted, the dome will need to be homed. At the time of this writing there is no fixture that allows the procedure to be executed without human intervention. The process is documented in 6 Troubleshooting.
The startup procedure is encapsulated in the task startup() from the ATTCS class provided by ts_standardscripts, which is available in the Jupyter server. The high-level operation can be run by doing the following:
from lsst.ts.standardscripts.auxtel.attcs import ATTCS
attcs = ATTCS()
await attcs.start_task
settings = {"atdome": "test.yaml", "ataos": "measured_20190908.yaml", "athexapod": "Default1"}
await attcs.startup(settings)
Although this procedure implements all the basic steps and checks, it has not been tested at the telescope yet. For now the sanctioned procedure is to execute the following series of commands in a Jupyter notebook.
import asyncio

from lsst.ts import salobj
Set up remotes for all the AT components.
d = salobj.Domain()
atmcs = salobj.Remote(d, "ATMCS")
atptg = salobj.Remote(d, "ATPtg")
ataos = salobj.Remote(d, "ATAOS")
atpne = salobj.Remote(d, "ATPneumatics")
athex = salobj.Remote(d, "ATHexapod")
atdome = salobj.Remote(d, "ATDome", index=1)
atdomtraj = salobj.Remote(d, "ATDomeTrajectory")
await asyncio.gather(atmcs.start_task,
atptg.start_task,
ataos.start_task,
atpne.start_task,
athex.start_task,
atdome.start_task,
atdomtraj.start_task)
Enable all components.
await asyncio.gather(salobj.set_summary_state(atmcs, salobj.State.ENABLED, timeout=120),
salobj.set_summary_state(atptg, salobj.State.ENABLED),
salobj.set_summary_state(ataos, salobj.State.ENABLED, settingsToApply="measured_20190908.yaml"),
salobj.set_summary_state(atpne, salobj.State.ENABLED),
salobj.set_summary_state(athex, salobj.State.ENABLED, settingsToApply="Default1"),
salobj.set_summary_state(atdome, salobj.State.ENABLED, settingsToApply="test.yaml"),
salobj.set_summary_state(atdomtraj, salobj.State.ENABLED))
Open dome shutter
await atdome.cmd_moveShutterMainDoor.set_start(open=True)
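Rather than watching the dome by eye, it is possible to wait for the dome to report that the main door is open. This is a minimal sketch, assuming the ATDome mainDoorState event and the ShutterDoorState enum from ts_idl:
from lsst.ts.idl.enums.ATDome import ShutterDoorState
# Wait (up to 10 minutes per event) until the main door reports the OPENED state.
door_state = await atdome.evt_mainDoorState.next(flush=True, timeout=600)
while door_state.state != ShutterDoorState.OPENED:
    door_state = await atdome.evt_mainDoorState.next(flush=False, timeout=600)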
Wait until the dome is fully open, then execute the next step to open the telescope cover.
await atpne.cmd_openM1Cover.start()
Finally, enable the ATAOS correction for the M1 pressure.
await ataos.cmd_enableCorrection.set_start(m1=True)
If the dome needs to be homed then run the following command:
await atdome.cmd_homeAzimuth.start()
5.2 Pointing
The action of pointing and starting to track involves sending a command to the pointing component (ATPtg) and then waiting for the telescope and dome to be in position, while making sure all components remain in the ENABLED state.
When using the ATTCS
class it is possible to perform the task with the following set of
commands:
import asyncio
import astropy.units as u
from astropy.time import Time
from astropy.coordinates import ICRS, Angle
from lsst.ts.standardscripts.auxtel.attcs import ATTCS
Initialize the ATTCS class.
attcs = ATTCS()
await attcs.start_task
Run the slew task. This task will only finish when the telescope and the dome are positioned. Also, this will set the sky position angle (the angle between the y-axis and North) to zero (or 180 degrees if zero is not achievable). It is possible to give RA/Dec and rotator angle as sexagesimal strings or floats (and mix and match them). For instance,
await attcs.slew_icrs(ra="20:25:38.85705", dec="-56:44:06.3230", sky_pos=0., target_name="Alf Pav")
or
await attcs.slew_icrs(ra=20.42746, dec=-56.73508, sky_pos=0., target_name="Alf Pav")
It is also possible to slew to an RA/Dec target and request a rotator position. To do that use the rot_pos argument instead of sky_pos. Note that this will request rot_pos at the requested time; the angle will then change as the telescope tracks the object.
await attcs.slew_icrs(ra="20:25:38.85705", dec="-56:44:06.3230", rot_pos=0., target_name="Alf Pav")
As with the 5.1 Startup procedure, this task has not been tested at the telescope yet. For now the sanctioned procedure is to execute the slew and track by commanding the pointing component individually. This also means the user has to handle the rotator angle computations. In this mode we only support setting the rotator position to a certain angle. Due to some binding issues we have been trying to keep the rotator as close to zero as possible.
import logging
import yaml
import numpy as np
from matplotlib import pyplot as plt
import astropy.units as u
from astropy.time import Time
from astropy.coordinates import AltAz, ICRS, EarthLocation, Angle, FK5
import asyncio
from lsst.ts import salobj
from lsst.ts.idl.enums import ATPtg
from astropy.utils import iers
iers.conf.auto_download = False
d = salobj.Domain()
atmcs = salobj.Remote(d, "ATMCS")
atptg = salobj.Remote(d, "ATPtg")
ataos = salobj.Remote(d, "ATAOS")
atpne = salobj.Remote(d, "ATPneumatics")
athex = salobj.Remote(d, "ATHexapod")
atdome = salobj.Remote(d, "ATDome", index=1)
atdomtraj = salobj.Remote(d, "ATDomeTrajectory")
await asyncio.gather(atmcs.start_task,
atptg.start_task,
ataos.start_task,
atpne.start_task,
athex.start_task,
atdome.start_task,
atdomtraj.start_task)
The next cell sets the observatory location. This is needed to compute the Az/El of the target, which is used to set the camera rotation angle. We are trying to keep the angle close to zero.
location = EarthLocation.from_geodetic(lon=-70.747698*u.deg,
lat=-30.244728*u.deg,
height=2663.0*u.m)
This next cell defines a target.
ra = Angle("20:25:38.85705", unit=u.hour)
dec = Angle("-56:44:06.3230", unit=u.deg)
target_name = "Alf Pav"
radec = ICRS(ra, dec)
This next cell will slew to the target and set the camera rotation angle to zero. Note that, unlike attcs.slew_icrs, this call returns right away and does not provide any feedback on when the telescope and dome arrive at the requested position.
# Figure out the rotPA that sets the Nasmyth rotator close to zero.
time_data = await atptg.tel_timeAndDate.next(flush=True, timeout=2)
curr_time_atptg = Time(time_data.tai, format="mjd", scale="tai")
print(curr_time_atptg)
coord_frame_altaz = AltAz(location=location, obstime=curr_time_atptg)
alt_az = radec.transform_to(coord_frame_altaz)
await atptg.cmd_raDecTarget.set_start(
targetName=target_name,
targetInstance=ATPtg.TargetInstances.CURRENT,
frame=ATPtg.CoordFrame.ICRS,
epoch=2000, # should be ignored: no parallax or proper motion
equinox=2000, # should be ignored for ICRS
ra=radec.ra.hour,
declination=radec.dec.deg,
parallax=0,
pmRA=0,
pmDec=0,
rv=0,
dRA=0,
dDec=0,
rotPA=180.-alt_az.alt.deg,
rotFrame=ATPtg.RotFrame.FIXED,
rotMode=ATPtg.RotMode.FIELD,
timeout=10
)
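If you want some feedback on when the slew completes, one option is to wait for the mount to report that all axes are in position. This is a minimal sketch, assuming the ATMCS allAxesInPosition event:
# The event is published with inPosition=False when the slew starts
# and inPosition=True once all axes arrive at the target.
in_position = await atmcs.evt_allAxesInPosition.next(flush=True, timeout=120)
while not in_position.inPosition:
    in_position = await atmcs.evt_allAxesInPosition.next(flush=False, timeout=120)
print("ATMCS reports all axes in position.")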
In case you need to stop tracking, use the next cell!
await atptg.cmd_stopTracking.start(timeout=10)
Use the next cell in case you need to offset to center the target in the FoV. This sets total offsets: if you set el=0 and az=-30 and then later set el=30 and az=0, it will reset the offset in azimuth to zero and make an offset of 30 arcsec in elevation.
await atptg.cmd_offsetAzEl.set_start(el=0.,
az=-100.,
num=0)
If you want to make persistent offsets you can use the following method.
await atptg.cmd_offsetAzEl.set_start(el=0.,
az=-100.,
num=1)
If you want to add your offset to a pointing model file, do the following.
await atptg.cmd_pointNewFile.start()
await asyncio.sleep(1.)
await atptg.cmd_pointAddData.start()
await asyncio.sleep(1.)
await atptg.cmd_pointCloseFile.start()
5.3 Using GenericCamera Liveview
The GenericCamera liveview mode can be used for a quick look at the telescope pointing, to check that a target is centered in the field after a slew, or to quickly evaluate the optics. When liveview mode is activated, the GenericCamera CSC will start a web server and start streaming the images taken with the selected exposure time. To visualize the images streamed by the CSC, we created a separate web server that connects to the CSC stream and displays the images. This is illustrated in the following diagram.
This is how to start live view in the GenericCamera:
from lsst.ts import salobj
import asyncio
d = salobj.Domain()
r = salobj.Remote(d, "GenericCamera", 1)
await r.start_task
Before starting live view, make sure to enable the CSC with the 4x4 binning settings.
await salobj.set_summary_state(r, salobj.State.ENABLED, settingsToApply="zwo_4x4.yaml")
When starting live view mode the user must specify the exposure time, which also sets the frame rate of the stream. So far, we have tested this with up to 0.25s exposure times.
await r.cmd_startLiveView.set_start(expTime=0.5)
Once live view has started, make sure the live view ssh rule is running; then you should be able to access the live view server by opening localhost:8881 in a browser.
Attention
The web server that streams the live view data is not in a stable state. If the browser is not loading the page, you may have to check the process running the live view server and restart it. See the 6 Troubleshooting section for more information about how to restart it.
To stop live view, you just need to run the following command.
await r.cmd_stopLiveView.start(timeout=10)
5.4 Using GenericCamera to take (FITS) images
The GenericCamera CSC was designed to emulate the behaviour of the ATCamera and MTCamera CSCs. That means the commands and events have the same names and, as much as possible, the same payloads, and the events marking the different stages of image acquisition are also published at approximately the same stages.
To take an image with the GenericCamera first make sure that live view is not running. If live view is running the take image command will be rejected. Then, to take an image:
r.evt_endReadout.flush()
await r.cmd_takeImages.set_start(numImages=1,
expTime=10.,
shutter=True,
imageSequenceName='alf_pav'
)
end_readout = await r.evt_endReadout.next(flush=False, timeout=5.)
print(end_readout.imageName)
You can download the image to your notebook server using the following command:
import wget
filename = wget.download(f"http://192.168.1.216:8000/{end_readout.imageName}.fits")
Note that this only works from the Jupyter notebook server, as it is connected to the control network. You can download the image produced by the command above to your local computer by running the following wget on the command line (make sure the chile-wget ssh rule is running).
6 Troubleshooting
Here we describe some of the currently known issues and how to resolve them.
6.1 ATMCS won’t get out of FAULT State
In some situations the ATMCS will go to the FAULT state and reject the standby command, preventing recovery of the system. We have been working on tracking this issue down but, should you encounter it, it is possible to recover by pressing the e-stop button on the main cabinet (close to the telescope pier) and on the dome cabinet (east building wall on the lower level) and then executing the E-stop reset procedure. This should clear the FAULT state and leave the ATMCS in STANDBY.
6.2 Live view server is not responding
The live view server, which is responsible for receiving images from the GenericCamera and streaming them to a user's web browser, is still in very rough shape. The server connects to the GenericCamera over a TCP/IP socket and provides an image streaming service using a simple tornado web server. The connector that is responsible for receiving images from the CSC is still not capable of handling a dropped connection; if there is a connection issue it cannot recover and continue operations. Moreover, if the liveview mode is switched off on the CSC, the connection is also dropped and the live view server is likewise not capable of reconnecting.
If any of this happens, the easiest solution is to restart the live view server. For that, you will need to connect to the container running the liveview server, kill the running process (e.g. with Ctrl+C after attaching), and restart it. This can be summarized as follows:
ssh liveview-host
docker attach gencam_lv_server
python liveview_server.py
Once the live view server is running you can detach from the container with Ctrl+p Ctrl+q.
6.3 Building CSC interfaces
To communicate with a CSC, we use a class provided by salobj called Remote. As you can see in previous sections, the Remote receives the name of the CSC as an argument, which ultimately specifies the interface to load. In order for the Remote to load this interface it needs to have the set of IDL libraries available. In some cases, the interface for the CSC that you plan on communicating with may not be readily available on the Jupyter notebook server. If this is the case you will see an exception like the following when trying to create the Remote.
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-2-470a83f93eee> in <module>
----> 1 r = salobj.Remote(salobj.Domain(), "Component")
~/repos/ts_salobj/python/lsst/ts/salobj/remote.py in __init__(self, domain, name, index, readonly, include, exclude, evt_max_history, tel_max_history, start)
137 raise TypeError(f"domain {domain!r} must be an lsst.ts.salobj.Domain")
138
--> 139 salinfo = SalInfo(domain=domain, name=name, index=index)
140 self.salinfo = salinfo
141
~/repos/ts_salobj/python/lsst/ts/salobj/sal_info.py in __init__(self, domain, name, index)
152 self.idl_loc = domain.idl_dir / f"sal_revCoded_{self.name}.idl"
153 if not self.idl_loc.is_file():
--> 154 raise RuntimeError(f"Cannot find IDL file {self.idl_loc} for name={self.name!r}")
155 self.parse_idl()
156 self.ackcmd_type = ddsutil.get_dds_classes_from_idl(self.idl_loc, f"{self.name}::ackcmd")
RuntimeError: Cannot find IDL file /home/saluser/repos/ts_idl/idl/sal_revCoded_Component.idl for name='Component'
But instead of Component it will show the name of the CSC you tried to connect to.
To resolve this issue, you will need to build the libraries. You can do that by putting the following commands in a notebook cell:
%%script bash
make_idl_files.py <Component>
Again, you will need to replace <Component> with the name of the CSC.
7 Advanced Operations Procedures
This section explains advanced procedures which may be required, specifically during commissioning or during servicing.
7.1 E-stop Reset Procedure
If an E-stop has been activated (or possibly an L3 limit switch hit) then the following procedure must be followed to free the system:
- Remove the issue that caused the E-stop to be activated.
- Activate both E-stops, the one on the telescope control cabinet, and the one on the dome control cabinet. Both will glow red.
- Release the dome E-stop by turning it clockwise a quarter turn or so
- Release the main cabinet E-stop in the same manner
- Press the blue start button on the dome cabinet
- Press the blue start button on the telescope control cabinet
If this is done correctly, all three LEDs on the Pilz devices in both cabinets should be brightly illuminated, as seen in the following image. If only the main cabinet button is pressed, then only the top light is bright. If only the dome cabinet button is pressed, the top and bottom lights are bright.
Note that if both E-stops are never activated simultaneously then the system will not reset.
Note
All L3 limit switches and E-stops are run through the smart relay system. This means that if an L3 limit (which is a hardstop at the extreme end of travel of the elevation, azimuth, M3 rotator and Nasmyth axes) is contacted, it will look as if an E-stop was pressed. To identify which L3 limit was hit, one must examine the interface of the smart relay. Any active signal will not have a filled box around the central number. The central number is then mapped to an L3 using the Auxiliary Telescope Electrical Diagram (Document-26731).
7.2 Viewing the ATMCS LabVIEW GUI
This is the GUI developed by Rolando Cantarutti and Omar Estay to display and interact with the telescope mount at a low level (directly from the cRIO, with no SAL communication). This is not meant to be used for regular operations.
Connections can currently be accomplished in two ways: the first uses a VNC connection to a Windows machine currently located in the AuxTel building; the second is to log in remotely using the LabVIEW Connector (requires Internet Explorer and a specific driver).
Open an ssh tunnel to the ATMCS Windows machine.
ssh -L 5900:192.168.1.49:5900 saluser@139.229.162.118
Using RealVNC (which is required due to encryption, although other clients might work) you can then connect to ‘localhost’ on port 5900.
Enter credentials (ask Patrick or Tiago)
If the GUI is not already open, then open Internet Explorer and enter the following address in the address bar.
http://192.168.1.47:8000/atmcs.html
One can also install the LabVIEW remote panel on a Windows machine (Internet Explorer only), then open a tunnel to the above IP on port 8000. This requires a download from NI, and then you'll have to open the tunnel using PuTTY (or equivalent). Details will be included in the ATMCS documentation upon delivery. We don't recommend this method unless absolutely necessary.
7.3 Resetting the ATHexapod IP Connection
For reasons which are under investigation, occasionally after a power cycle (we think) the hexapod TCP/IP connection goes down. To reset it, one must connect a serial port to the device, establish a connection using the (Windows) PIMikroMove software, close the connection, then power cycle the controller. Power cycling can be done remotely (using the switched PDU). Until this problem is resolved, we've left a permanent serial (RS-232) connection to a local Windows machine.
Follow these steps to re-establish the TCP/IP connection:
- Establish a VNC connection, the same as the ATMCS GUI VNC shown here.
- Open PIMikroMove software from start menu
- Open new connection and select C-887 controller, and click connect
- Close connection
- Power cycle controller (which will cause the hexapod to lose the reference position)
- Put hexapod CSC in enabled state (which will send the hexapod to the reference position)
- Move hexapod to desired position
7.4 Mitutoyo Micrometers and Copley Controller Connections
The Mitutoyo devices (when connected) are currently controlled through the Copley PC (located in the bottom of the telescope cabinet). Connection to this Windows machine uses TeamViewer. Contact Patrick for credentials.
More details to follow.
7.5 Telescope Cabinet Switchable PDU
In the event that a controller in the cabinet needs power cycling remotely, this may be done by logging into the switchable PDU mounted in the cabinet. The IP and connection info can be found here.
- Channel 1 is connected to the main 24V supply. This will power off the cRIO (and possibly the Copley controllers, Pilz Device, and Smart Relay).
- Channel 2 is connected to the power bar in the bottom of the cabinet, which has the 220V connection to the mount (which powers the Embedded PC for the Collimation Camera) as well as the hexapod connected to it.
7.6 AT Dome Communication Loss
If during operation the dome controllers lose connection, which is seen either from the software or from the push-buttons failing to work, then this procedure must be followed. The dome has two types of communication failures:
- The two cRIOs lose communication with each other (notably, the cRIO in the rotating part of the enclosure loses connection with the bottom box and may be blocking the connection). If the CSC is connected and in DISABLED or ENABLED state, this will be shown in the scbLink event (must verify). It can also be seen in the Main Box Dome Control LabVIEW Remote on the ATMCS machine as the TopComms light in the bottom left corner.
- Press the reset button on the cRIO inside the electrical cabinet on the rotating part of the dome (near the lower shutter) to resolve this issue
- The Main cRIO (located in the dome electrical cabinet on the first floor) is not correctly releasing the TCP/IP connection. This can be observed by being able to ping the box but not open a telnet connection (port 17310). Also, the HostComms light will be illuminated in the Main Box Dome Control LabVIEW remote.
- Press the reset button on the cRIO in the dome cabinet on the first floor to resolve this issue