The First Conference I Have Ever Attended – Experience at SIGGRAPH (Part 1)

HPC Support Member Jack Chen reviews his first conference, SIGGRAPH, and offers insights into large-scale projects.

Jack Chen

Last week I attended the first conference of my life – SIGGRAPH (Special Interest Group on Computer Graphics and Interactive Techniques). It is THE conference for computer graphics (CG), a broad academic field that spans special effects, 2D/3D animation, VR, computer vision, and more. I have always been interested in the field, and I also wanted to find a specific path of interest from which to start a research project. Luckily, it took place in Los Angeles again this year, so I took the Metrolink to Downtown and attended the conference!

SIGGRAPH 2019

It was a great experience that exceeded my expectations of what a conference could be. I like SIGGRAPH mostly for its richness of content. When I first glanced at the schedule, I was overwhelmed by how many activities there were: for 5 days in a row, there were often 3 to 5 things happening at the same time. Below are two screenshots of the Sunday schedule – you can see that many sessions started at 9:00 am. Of course, I could only be in one place at a time, so I had to choose carefully which ones to attend.

I was totally blown away when I entered the LA Convention Center, and I was really impressed that this was an academic conference. They played super exciting music at the entrance, which made me feel as if the center of the world were inside the convention center.

So what kinds of programs does the conference include? In what ways is SIGGRAPH different from other academic events? Below is a picture of all its activity types. You can see some familiar ones – courses, a keynote, talks, technical papers – but there are also some peculiar ones.

Real-Time Live! was my favorite event of all of SIGGRAPH. Instead of presenting their research with slides, the teams showcased their technology in real time and on stage. The projects chosen were the most graphical and mind-blowing of all. For instance, one team demonstrated GauGAN, a piece of software that can transform a doodle a six-year-old might draw into a photorealistic image.

A huge area of the exhibition hall was devoted to VR. Each project had its own zone, and visitors lined up to give it a try. It was a bit surprising that most were using the same brand of VR headset found at the In the Know Lab in Pomona, except that some projects required the wireless ones.

VR Theater was a huge circular area enclosed by red curtains, with a big circular screen on the wall showing surreal starlight accompanied by music. Viewers wore VR headsets, sat in a circle, and experienced five art pieces in a row.

Though I had tried a few VR short movies and games before, the projects at the VR Theater still expanded my understanding of storytelling in a visual format. However, I really wished there had been some interaction between viewers. It seems we still have a lot to explore in VR art.

While at the conference, a couple of exhibition projects caught my attention. One was an AR zombie-shooting game developed by a Japanese team: you wear an AR lens to see the zombies, and a pack on your back shakes a bit each time you fire or get hit.

SIGGRAPH also offered hands-on workshops in its exhibition hall, in a cordoned-off area with around 30 computers. The sessions were mostly run by companies selling their software, showing how easily you can build a cool-looking project with their tools. I partly followed a UE4 camera-movement project and a professor's lecture on troubleshooting 3D printing. They were informative, for sure, but usually too long – I had too many other activities to sit there for two hours.

All tickets came with access to the Computer Animation Festival. It took place in the Microsoft Theater, which has 7,100 seats, and on that day the theater was full. This year the festival featured nominated works from both companies and students; they covered a huge diversity of styles and were equally well made. In the end, the audience was asked to vote for their favorite on the SIGGRAPH mobile app, and I was glad to see a Chinese music video win first place.

After the great show, the crowd was led to Xbox Plaza and treated to some nice food and drinks. I walked around and talked to people. Unfortunately, I was not bold enough to approach the senior-looking tech workers from the companies, so I confined my conversations to student volunteers and college researchers. The plaza was a great sight when it was filled with brilliant minds.

The artistic side of SIGGRAPH was fantastic, and the academic side was just as exciting. I will share my insights on the latter in the next blog.

By Jack Chen

Experimenting with Nanome

HPC Support Staff and Pomona College student Ekeka Abazie reviews the research he’s conducted while using the Virtual Reality environment Nanome, focusing on how the program can visualize and render molecules.

Capture of me using Nanome on the Oculus Tethered Headset

I’ve made one poster presenting my summer computational chemistry research with the chemistry department that I will be presenting on Stover Walk at Pomona College as well as at the SACNAS conference in Honolulu, Hawaii. However, I’m also interested in presenting the work that I did with Nanome while at HPC for the Pomona College summer presentations at Stover Walk. After getting permission from Jorgensen, who is coordinating the poster presentation at Stover Walk, I am now hoping to present two posters—one based on my work with Nanome and another based on my SURP research.

It was difficult at first to decide what I would present on my poster concerning Nanome. I originally wanted to use Nanome as a research tool, with the main focus on competitive inhibition of succinate dehydrogenase, an enzyme complex of the citric acid cycle. I had previously worked on a similar, more experimentally focused project with resources provided by the Biology department, but this was going to be a different examination based more on distances and modeling. After talking to Senior Shklyar [sic], I decided against this and opted for a poster that looked more at the capabilities of Nanome; in other words, focusing more on the VR software. However, after internalizing that conversation even more, I realized that the blog posts I write concerning Nanome are sufficient to display my work on the project. We already have three posters representing HPC, and there will definitely be other opportunities for me to demonstrate my love for and commitment to my work at HPC. Nonetheless, I still learned a lot more about Nanome by getting into it with the intent of making a presentation, so I think that deserves some discourse.

Normal Glutamic Acid positioning (healthy patient)

I modeled the Sickle Cell Anemia mutation using Nanome. Sickle Cell Anemia is caused by a single point mutation at the 6th codon of the β-globin gene, which changes GAG (coding for glutamic acid) to GTG (GUG in the mRNA), which codes for valine. I thought this was a cool undertaking because, at the time, it represented a valuable use of VR in education. I'm imagining a genetics class that lets students mutate sections of DNA and see how that would affect an organism. A whole class in VR working on mutating DNA molecules sounds like Ender's Game to me, which is super exciting.

Mutated Valine positioning (sick patient)

I also modeled the binding of carbon monoxide to myoglobin, which causes carbon monoxide poisoning. Carbon monoxide binds to the central Fe2+ ion of myoglobin, turning the complex into carbonmonoxymyoglobin, which cannot perform the normal respiratory functions of the standard oxymyoglobin complex and leads to respiratory difficulty and, possibly, death. A pretty cool health-education application in my eyes. It is also worth mentioning the attention to detail in Nanome's visualization of these molecules. It's astounding.

Normal Oxymyoglobin Fe Complex

Lastly, I learned that Nanome can run a lot of functions I didn't expect, such as executing a multitude of commands or accepting different extensions and attachments that make things easier for the user. I find the extensions and attachments the most promising, because they extend the boundaries of what Nanome can be used for, similar to how games that accept mods can be used for so much more.

Aberrant Carbonmonoxymyoglobin Fe Complex

I also learned how to render uniquely modified molecules using Nanome. For example, I rendered a modified human lactate dehydrogenase in which the first 20 amino acids of each chain had been replaced with GFP (Green Fluorescent Protein) from the Aequorea victoria jellyfish. I wonder what else I thought Nanome couldn't do that it actually can.

By Ekeka Abazie

The “How” is Secondary

Kevin Ayala (Pomona College, Computer Science) provides an encouraging look at his experience working with HPC Support and at the In The Know Lab. Kevin discusses the high standards involved in the projects as well as the opportunities open to minority students who have not had access to such equipment.

Kevin Ayala

I only heard about the lab because I happened to be in the right place at the right time. But as soon as I joined, I knew it was the perfect place for me to be. When Asya gave me the tour, there was a sense of wonder at seeing all the technology that was available, but also at seeing what students like me were already doing with them. It was something I could only describe with the first word that came out of my mouth: “Wow!”

And to be honest, I was a little scared as well. Everyone was already so far ahead, and it seemed like everyone else knew so much. I hadn’t grown up with access to any of this technology, and I felt like I was trying to drink water through a fire hose every time I went to a team meeting. Even now, months later, I know I still have a lot left to learn before I can delve into the industry in any professional way. But I’ve learned something else through my work in the lab, perhaps something more important.

It wasn't something that anyone said; rather, it came from feeling the energy in the room at each of those meetings. It was a sense of inspiration, of confidence in our own abilities. It was the feeling of dozens of minds working together, each believing in the changes we were making and the projects we were undertaking. Most importantly, it was the sense that our inexperience, our lack of knowledge or talent or resources, was not a limitation. It was nothing more than a stepping stone, and we could use it to go wherever we want to go.

"The How is Secondary" became my own little phrase, both to explain the near chaos of the lab and to motivate myself. It doesn't mean that the "How" isn't an important part of what we do, but it does mean that I don't have to let it stop me. I slowly became more confident in the idea that I didn't have to know everything about what I was working with, not yet. I could learn along the way, and perhaps even learn better! I could create my own "wow" moments.

That was a big change for me. I’ve always loved learning, but it’s always served as a means to an end, both a way to get where I was going and a stop sign to keep me from going there until I was ‘ready’. I had always held myself back, waiting until I knew or had enough experience to do what I wanted to do. And that’s not the way the Lab works.

In fact, one of the great things about the Lab, and the reason it's so successful, is that it encourages us to experiment while holding us to a higher standard when doing so. When I started working with Raspberry Pis, I knew nothing about them except that they were small computers. For that matter, I knew nothing about Linux or about using Secure Shell to access another system. But I was encouraged to learn about them – to find the resources I needed. At first, it was terrifying. Knowing that I might fail, that I might not know enough to make it work, made me wary of starting my own project. But with each little breakthrough, I grew more confident and more excited. And just last week, I watched in wonder as the 3D printer finally responded to my click on the laptop touchpad.

That little bit of movement was such a relief. I hadn’t let myself believe that I’d done it until I saw it for myself, but man, it felt so good to know that I had done that, from scratch and by learning on the go. I think that bit of confidence that the Lab gives is sometimes underestimated.

One of my favorite things about the Lab, and something I know Asya works very hard on, is the atmosphere of diversity and inclusivity. Part of that is knowing that minority students often don't have the confidence to work in the field, because we've never had the chance to work with, or even be around, these types of technology. There's a certain anxiety, a worry that you'll break or damage something too expensive for you to even look at. But I think the reason we've been so successful in this endeavor is the implicit trust given to us by the idea that we can learn what we need along the way. That trust inspires confidence, and that confidence, in turn, lets our ideas and our learning flourish. And that is something I'm profoundly thankful for.

By Kevin Ayala

Impressions of Nanome Curie

Ekeka Abazie uses Nanome to see molecules in a Virtual Reality environment. Here, he discusses his impressions of using Nanome Curie.

Right off the bat, I noticed a reduction in quality between Nanome on the tethered Oculus S headset and Nanome Curie on the Oculus Quest headset. The best comparison I can make is between an original game and the lite version you would find on a lower-tier console, which leads me to question whether the platform is partially to blame.

The graphics on Nanome Curie were very pixelated and blocky, making analysis very difficult. Further, there were slow reaction times that added vibrations, making the kind of careful analysis I could perform with Nanome on the Oculus S challenging. There were also significant delays, and at times the molecule would be pushed far into the distance, so I had to reel it back toward myself as if I were fishing.

I wasn't even able to analyze the same molecules as I had with the Oculus S, because many wouldn't load properly. For example, when I tried to load graphene from the "Featured" menu, a text box with a green checkmark would appear indicating that the molecule had loaded, and then I would get an error message. For other molecules, which I searched for in the database, I would often just get the error message from the start.

I also noticed that I couldn't focus on just one molecule and enlarge or work with it. Any change in size I made to one molecule carried over to the other molecules – which, yes, keeps everything to scale, but I had imagined that selecting a molecule would let me make edits specific to it rather than to the entire workspace. This was particularly annoying when I had 2HBS, a rather large molecule that takes up a lot of space, in the same workspace as a small hexacarbonyl ring: I had to move 2HBS quite far away if I wanted to enlarge the hexacarbonyl and focus on it.

I spoke with Professor O'Leary in the Chemistry Department at Pomona College (I originally came to inquire about past assignments I could use to test the VR software) about the feasibility of VR for molecule visualization compared to software already dominant in the market, like SPARTAN, ChemDraw, or GAUSSIAN. He told me that he didn't see a need for VR in the market, because the existing software seems much better suited to the task and easier to work with. However, he said he could see how VR would aid comprehension of molecule manipulation and in education, especially if you could input specific molecule files using a variety of different file extensions. This is something I mentioned in my review of Nanome on the tethered Oculus S, where I easily understood how to upload files for molecules of interest and work with them. On Nanome Curie, however, using the non-tethered Oculus Quest headset, I don't see how this same file-upload process would work.

By Ekeka Abazie

Modeling Electron Transfer on Graphene Sheets

Photo accessed from https://www.understandingnano.com/graphene-properties.html on July 9, 2019

While the rest of my research group is invested in mentoring the PAYS student program at Pomona College, I have moved on to a different project concerning graphene lattices and electron transfer across carbon atoms along their covalent bonds. My previous research project concerned square lattices and the journey of reactants to different traps along the lattice (the mean walk length ⟨n⟩). Similarly, my current project is primarily concerned with the transfer of electrons across hexagonal lattices and with calculating the probable walk lengths to a specific centrosymmetric site on the lattice. This has potential applications in superconductors, with which graphite is known to share similar behaviors, as well as in the folding patterns of metalloproteins such as cytochromes and plastocyanin.

Based on previous literature, we decided to map the hexagons that make up a graphene lattice onto triangular lattices composed of smaller triangles. The center of each small triangle became a site (i.e., a carbon atom), which was then attached to the sites directly adjacent to it to model the bonds (i.e., covalent bonds). This construction made a neat pattern of quickly distinguishable hexagons across the triangle. Calculations were then done using a matrix system of equations under different boundary conditions. It should also be mentioned that we experimented with numerous variants of triangular lattices, and we are still looking at ways to connect the different models. For example, we did calculations on Sierpinski gaskets, which have no triangles in the center surrounding the centrosymmetric site, and on a "triangular lattice" that maximizes the number of triangles and has six bonds connecting to the centrosymmetric site.
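To give a flavor of the kind of calculation involved – this is not our actual matrix method, and the graph here is far smaller than a graphene sheet – the mean walk length ⟨n⟩ to a trap obeys a simple linear system: at every non-trap site it equals one plus the average of its neighbors' values, and it is zero at the trap. A toy Python sketch solving that system by iteration on a single 6-site ring (one hexagon of carbon sites):

```python
def mean_walk_lengths(neighbors, trap, sweeps=5000):
    """Iteratively solve <n>_i = 1 + (1/d_i) * sum over neighbors j of <n>_j,
    with <n>_trap = 0, by repeated Gauss-Seidel sweeps over the linear system."""
    n = {site: 0.0 for site in neighbors}
    for _ in range(sweeps):
        for site, nbrs in neighbors.items():
            if site != trap:
                n[site] = 1.0 + sum(n[j] for j in nbrs) / len(nbrs)
    return n

# Toy graph: a 6-site ring, with the trap placed at site 0.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(mean_walk_lengths(ring, trap=0))
```

For a ring of N sites this reproduces the known result ⟨n⟩ from site k equals k(N−k); the same fixed-point idea extends to any lattice once its adjacency structure is written down, though a direct matrix solve is more efficient for large lattices.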

Recently, we have begun considering stacking layers of these lattices on top of each other to model graphite, a 3-dimensional structure composed of layers of graphene. Paint 3D has been a useful tool for creating the diagrams that model this, but we are still considering what reasonable boundary conditions would be. We have also partnered with HPC to get access to more resources and programs that can aid in this component of our research. I will hopefully be included in this next venture, which could open many new doors in our exploration of graphene.

By Ekeka Abazie

Creating an Educational Curriculum with Tello Drones

Amin Nash discusses his journey in creating a Tello Drone curriculum.

STEM education has seen a sharp rise in academic interest over the past half-dozen years. New approaches to educating younger students in the application of STEM have brought innovative uses of "simple" technologies. One example is DJI's Ryze Tello drone, which appears to be a "simple toy" for the younger generation but is actually a highly effective tool for teaching students programming, mathematics, and creativity.

Asya Shklyar gave me the task of organizing an easy-to-use curriculum for both younger and older students who want to interact with the In The Know Lab's Tello drones. This was a challenge for me because, going into this project, I did not know anything about flying drones or programming them. To balance out my deficiencies, I spent approximately two weeks researching how DJI's technology works, how the Tello drone uses its wireless link, and whether there is any way to make the drone think on its own. This research led me to two options: use DroneBlocks, or program the drone through Python. Since my initial plan was to create a curriculum addressing students from underrepresented and minority communities who don't have access to broader programming education, I decided to use DroneBlocks as my form of programming.

I know, DroneBlocks is “too easy” and “too childish”.

However, DroneBlocks is an easy-to-use UI that lets a user drag and drop pre-made command blocks onto the screen. For example, students can chain "Takeoff", "Fly Forward", and "Land" without actually typing any code. For me, the advantage of this system was not its efficiency but its visualization and interface. Programming is a very sequential process, almost like writing a story: there is a command at the beginning, a series of commands that make up the body, and a (hopefully) satisfying ending. Being able to visually see the "Takeoff" and "Land" commands allows for a cognitive understanding of how a program should flow and of what should go between those commands. Thus, when thinking of communities who have never programmed before, I found it most important to have them grasp the overall sequential logic behind programming.

DroneBlocks Interface

Once I pinpointed which programming interface I wanted to use for the Tello, I began looking through GitHub and online resources to develop projects that could be consolidated into a curriculum. The three most useful resources I found were the DroneBlocks website itself, Mr. Baldwin's GitHub, and the SheMaps course that Asya referred to me. These resources supplied practical "missions": Mr. Baldwin provided organized instruction on the mathematics and creativity behind DroneBlocks, while the SheMaps course provided instruction in measuring the world and applying critical thinking to those measurements. I combined the two to make a comprehensive lesson plan that used the Tello drone's programmability to transcend its appearance as a "toy" – it became a valuable tool for mathematics, programming, and creativity.

A sample script

The lessons I developed were simple: land the drone on a table, fly through obstacle courses, program the drone to "think" on its own through loops, and finally, fly an X-Y slope.
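For older students working in text rather than blocks, the loop-and-slope lesson translates naturally into the Tello's plain-text commands. Here is a hypothetical Python sketch (the command names follow the Tello SDK; distances are in centimeters, and the drone's minimum move per command is about 20 cm) that builds the command sequence for a slope of rise 60 cm over a run of 120 cm:

```python
def slope_commands(run_cm, rise_cm, steps):
    """Build a list of Tello SDK command strings that trace an X-Y slope
    by alternating small forward and upward moves inside a loop."""
    dx = run_cm // steps   # horizontal distance per step
    dy = rise_cm // steps  # vertical distance per step
    cmds = ["takeoff"]
    for _ in range(steps):
        cmds.append(f"forward {dx}")
        cmds.append(f"up {dy}")
    cmds.append("land")
    return cmds

# A 120 cm run with a 60 cm rise, split into 3 steps (slope = 1/2):
print(slope_commands(120, 60, 3))
```

The same program is easy to mirror in DroneBlocks with a repeat block, which is exactly the connection between loops and the X-Y slope that the lesson aims to teach.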

The curriculum came out to approximately 25-30 pages and took me nearly 3 weeks to finish. I was able to demonstrate it to a number of my coworkers while also using it with students around the Pomona community.

One such community was the City of Knowledge school in Pomona, ten minutes from Pomona College. A private Islamic school, it is reinvigorating its STEM program and opening its doors to new approaches to the STEM field. David D'Attile and I conducted an hour-and-a-half lesson on drones for about 10-11 students, and they were all extremely excited to use the drone as something more than a toy. They found joy in being able to actually tell the drone what to do and where to go.

During this lesson, what I found most impressive was the students' ability to figure out the sequential logic behind programming on their own. They understood the process of commanding the drone appropriately, used critical-thinking skills to fix their mistakes, and landed the drone in the right spot multiple times.

I found that the Tello drone is an extremely easy tool for teaching students the fundamentals of STEM, and it is even more useful in that it allows for extremely quick learning. Students who tend to be slow learners, or who are quick to get sidetracked, stay focused with the drone and learn with a lot of joy through its capabilities. Most importantly, I found that teaching STEM is not necessarily daunting – in fact, it is a very enjoyable activity.

By Amin Nash

Drone Unboxing

On Tuesday, February 12, Asya excitedly escorted me, Sabina, and Ino into Project Room C. Inside, we found a few new additions to the lab: a DJI Mavic 2 Pro, a Tello EDU mini-drone, and accessory/starter kits for both.

The Mavic 2 Pro box included the drone itself, extra propellers, one battery pack/charger, a controller, and the gimbal-mounted Hasselblad L1D-20c camera. The drone is easily manageable in a single hand and weighs 2 pounds with the battery and propellers installed – a testament to DJI's engineering. The controller comes fully folded and flat out of the box, but when unfolded, two antennas, a phone-holding mechanism, and cleverly removable joysticks adorn the unit (note that only Micro-USB and Lightning phones are supported out of the box; a USB-C adapter is available for $10 from DJI). When connected, a phone displays a live feed straight from the Mavic's mounted camera to assist in flying at extended range.

As for Mavic accessories, we received the DJI Fly More kit with extra components and FPV goggles directly from DJI. The kit included 4 extra propellers, 2 extra batteries, a charging extension capable of docking four batteries, a car-charger cable, a battery-to-USB x2 adapter for charging the controller or any USB accessory from the battery packs, and a backpack designed to carry all the items. Importantly, the charging hub charges the batteries one at a time rather than all four simultaneously. There are multiple ways to pack the backpack efficiently, with equally many guides available through YouTube and blog posts (like this one).

The Tello EDU box contained the drone itself, 4 removable propellers, 4 removable prop guards, a prop-removal tool, a battery and charging cable, and 4 double-sided "mission pads" that the drone can be programmed to recognize and use in different ways. The drone looks like a smaller, cuter cousin of the Mavic 2 Pro, weighing in at less than three ounces and measuring about 4 inches across. Oddly, the box did not include a way to charge the battery other than placing it in the Tello and charging through the drone's side-mounted Micro-USB port. Nor did it include a controller, since an Android or iOS device can connect to and pilot the drone over WiFi using the Tello or Tello EDU app. Alternatively, the drone can receive commands from a desktop Chrome browser using the DroneBlocks Chrome Web App.

In addition to the components included in the Tello's box, HPC received 4 extra prop guards, 4 extra propellers, 2 extra batteries, and a battery charging hub that can house 3 individual batteries. As with the Mavic's battery hub, the Tello hub can only charge one battery at a time.

These two drones will serve vastly different purposes within Pomona's HPC department. The Mavic is an enthusiast-level drone capable of 30 minutes of flight time and 18 km of range on a single charge (under optimal, zero-wind conditions, of course). The Mavic's previously described camera technology, alongside its impressive flight capabilities, makes it fit for purposes including footage capture, data modeling, and remote exploration. Its enhanced features and extended range warrant trained pilots and communication between Pomona College and local airports prior to flying, which HPC is currently looking into.

In contrast, the Tello EDU is a cheap mini-quadcopter designed to make drone flight accessible to anyone while teaching the fundamentals of drone programming. The Tello lasts just under ten minutes on a charge, and its size and limited controller range keep it from flying outdoors. Even though its capabilities are limited, the combination of easy-to-use apps and a basic SDK makes it a perfect drone for inexperienced or simply curious users. The Tello can receive commands issued over a Wi-Fi UDP connection from Python, Swift, or Scratch, which individuals can use to program their own controllers or predetermined routines. Additionally, more advanced programmers can craft swarming programs that let multiple Tellos "think" as one, or software that uses the Tello's proximity sensor and camera to recognize objects such as people or animals through machine learning.
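As a rough illustration of that UDP interface, here is a minimal Python sketch. It is untested against real hardware: the IP address, port, and command strings follow the published Tello SDK conventions, and the helper names are my own.

```python
import socket

# The Tello's default IP address and SDK command port (per the Tello SDK docs).
TELLO_ADDR = ("192.168.10.1", 8889)

def make_socket(local_port=9000):
    """Bind a local UDP socket so we can also receive the drone's 'ok'/'error' replies."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", local_port))
    sock.settimeout(5.0)  # don't hang forever if the drone never answers
    return sock

def send_command(sock, cmd):
    """Send one plain-text SDK command string and wait for the drone's reply."""
    sock.sendto(cmd.encode("utf-8"), TELLO_ADDR)
    reply, _ = sock.recvfrom(1024)
    return reply.decode("utf-8")

if __name__ == "__main__":
    sock = make_socket()
    # 'command' switches the drone into SDK mode; the rest fly a short hop.
    for cmd in ("command", "takeoff", "forward 50", "land"):
        print(cmd, "->", send_command(sock, cmd))
```

Because the protocol is just plain text over UDP, the same idea ports directly to Swift or Scratch, and a swarm program is simply one socket per drone with each Tello on its own network.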

Both drones present unique use cases, and HPC and I look forward to experimenting with and mastering these miniature flying marvels.

By David D’Attile

An English Grad Student’s First Semester Experience in HPC Staff

Amin Nash

I was probably like most students around the Claremont Colleges: I wanted on-campus research opportunities, began exploring Handshake for jobs, and eventually came across the HPC Support Staff position at Pomona College. Though the job was under Pomona's ITS department, I was attracted by the chance to help write reports on various completed projects. I figured it was a chance to mix my major in the Humanities with the field of technology. I spoke briefly with Asya, was invited to the first meeting, and what happened next – which I hope is similar for most students around the Claremont Colleges – was a sheer, overwhelming flood of information that left me slightly anxious and confused, but immensely excited about the possibilities.

A little bit about myself: before coming to Claremont Graduate University, I worked four to five years in the business world of web startups and hospitality companies, using my education to write content, policies, and business plans. I also made extra cash working as a bouncer at various Hollywood nightclubs (I'm not your plug). Though I was paid well and the work was consistent, I was always left feeling unfulfilled and wanted to contribute more to society. I wanted to work harder, to grind, to struggle. So I enrolled in the Master of Arts program in English at CGU, hoping to finish by 2020 and to attain concrete knowledge to help prove, adapt, and innovate my thoughts in society. Specifically, I chose Claremont for its approach to transdisciplinary and interdisciplinary studies: I could adapt my knowledge, my skills, and my goals to fields outside my own.

Why English? Because I love it, that’s why.

So it goes without saying that my initial experiences with HPC were met with a lot of tension, mostly self-imposed. During the first meeting, Asya threw out terms like "NLP", "GIS", "GitHub", and "neuroscience networks and cognitive recognition reflective for sentiment analysis" (like, what the hell, man). I immediately regretted my 4-5 years of past business work, as I realized how much I had missed in life and how fast things change. I did not know people could share their projects in repositories (GitHub), and I did not know that C++ is becoming an "old language" as Python makes its way into modernity.

There was so much to learn! So much I missed! How would I keep up? Why am I an English major??

Then I realized how old I am. The immense speed of advancement made me realize that my experiences growing up differed greatly from those of Pomona students. I had to admit to myself that even though I'm an "older" student, the students at Pomona will always know more than I do about the current state of technology.

I really had to swallow my anxieties and discomfort. I had to admit to myself that I'm older, that I don't know a lot about technology, and that, because things have changed so fast, I couldn't pretend to know what I was talking about.

So my solution was to ask a lot of questions: "How do I learn Python? What projects do you think I can contribute to? Can I assist with any writing or management projects?"

Asya was very open-minded and honest about my position. She knew that, as a Humanities major, it would be hard for me to actively "work" in technology and the sciences, but that didn't stop us from finding ways for me to contribute. Since I have experience with websites and blogs, Asya let me use those skills to work on the research blog and help pump out student blog entries. Connectivity and interaction between audience and product are essential in English, and being able to articulate the students' thoughts was a valuable skill that kept me working. We found that being "older" and a Humanities major isn't a bad thing – in fact, it was somewhat beneficial.

From there, I began contributing more to the In the Know Lab, working with other students to learn the equipment as well as make it approachable for new students. Most prominently, I was able to attach myself to the Digital Innovations Text Lab with fellow student Jack Weber and ITS Business Analyst Kristina Khederlarian, which allowed me to actively engage with a research project throughout the semester. Through this project, I learned how to use Python to load various corpora and large bodies of text, tokenize sentiment-bearing words and terms, and count them to find patterns in human reasoning – specifically around trustworthiness.
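The core of that workflow can be sketched in a few lines of Python. This is a minimal, illustrative sketch only – the term list and sample text are hypothetical, not the actual Text Lab data – showing the basic loop of tokenizing a body of text and counting how often chosen trust-related terms appear:

```python
import re
from collections import Counter

# Hypothetical trust-related vocabulary; the real project used its own word lists.
TRUST_TERMS = {"honest", "reliable", "trust", "deceive", "doubt"}

def tokenize(text):
    """Lowercase the text and split it into simple word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def term_frequencies(text, vocabulary):
    """Count how often each term in `vocabulary` appears in `text`."""
    tokens = tokenize(text)
    return Counter(t for t in tokens if t in vocabulary)

# Hypothetical sample sentence standing in for a larger corpus.
sample = "An honest broker earns trust; a careless broker earns doubt."
print(term_frequencies(sample, TRUST_TERMS))
```

Running the same counting over many documents, and comparing the resulting frequencies, is one simple way to start looking for the kinds of patterns the project was after.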

Things turned out alright for this “old English major.”

Just like every student, I got busy with final papers and research projects of my own. It was challenging to balance the job with my research projects, but Asya made it easy to contribute while also dealing with an immense course load. I was very grateful for that.

If there’s any advice I could give to fellow students in the struggle, it would be to embrace it and to do your best. Don’t feel overwhelmed, and don’t feel “out of place.” There is a place for you; it just takes some effort to really begin to contribute. Take the lessons you’ve learned, put them in your own words, and see if you can continue to make the world a better place!

By Amin Nash

Pomona HPC Members Visit SpaceX in Hawthorne

Four Members of HPC Support Staff Took a Day Trip to SpaceX

Amin (CGU – English), Chris (Pomona – Economics), Nicole (Pomona – Computer Science), Lindsey (Pomona – Mathematics)

On April 4, 2019, HPC Director Asya Shklyar took four members of the HPC Support Staff to visit SpaceX in Hawthorne, in the Los Angeles area. In order to beat the notorious South LA traffic, Asya arranged an afternoon meeting in Hermosa Beach, where she treated the four staffers – Chris Nardi (Pomona, Economics), Lindsey Tam (Pomona, Mathematics), Nicole Talisay (Pomona, Computer Science), and Amin Nash (CGU, English) – to a few hours on the beach and a very delicious dinner at Abigaile. The students enjoyed healthy meals of roasted salmon, mushroom meatballs, and scallops, with beautiful views of the ocean.

After dinner, the students were off to Hawthorne to visit SpaceX. Asya introduced the students to Jesse Keller, who worked in one of SpaceX’s managing departments and was very enthusiastic about showing the students around the campus. Jesse gave an overview of SpaceX from its first rocket launch in 2010 to its current state of producing more than three rockets a month, pointing out that SpaceX’s primary focus was to always push its mission forward and think big. The students got to see the different engines being manufactured and the immense attention to detail in each engine – from the exhausts to the bearings. The students also got to engage with 3D printers that printed in metal (fun fact: the In The Know Lab uses a smaller printer that uses resin and filament instead of metal), all while watching engineers work on the cockpits of future spacecraft.

Amin (CGU), Nicole (Pomona), Lindsey (Pomona), Chris (Pomona)

One of the biggest takeaways from the trip was Jesse’s explanation of how SpaceX is committed to building the technology that makes the rockets, rather than simply building the rockets themselves. He compared this to Tesla’s model, which focuses on ways to produce more electric cars rather than on redesigning the vehicle itself. With this in mind, Jesse showed the students the various technologies that help engineers and mechanics produce the different parts of a rocket. The end of each production stage can be compared to building a Lego rocket, where each individual piece can be “attached” to the others easily and efficiently. The most important thing is that each part works extremely well!

The trip, as a whole, was fun and engaging. The students got to see the real-world work that goes into planning, inventing, and innovating a spacecraft, as well as the administrative and management side of the entire process. Equipment like 3D printers and related technologies is available at Pomona at an educational level, and the same tools are actively used at a much larger scale at places like SpaceX. All in all, the students got to engage and connect with one another in Hawthorne while also learning about a large-scale operation with big plans for the future.

Some fun on the beach 🙂


Virtual Reality Tools at HPC Support Team’s Weekly Meeting

During HPC’s weekly Friday meeting, Director of HPC Asya Shklyar transformed the entire meeting room into an interactive project space for HPC Support members. “Let’s move the chairs to make room,” Asya said, “that way we can use the board to document all the programs associated with different fields of study in VR.” The students began moving chairs and working together to get the space functioning. Soon, the meeting room became its own VR workspace.

Configuring Alienware laptops
Students interacting with Oculus Go, a wireless VR headset.


Students were able to actively learn how to connect all the equipment properly and how to begin engaging with each individual program. Asya then asked the students to organize the programs by their specific fields of study, ranging from Biology to Economics to Linguistics. The idea was to curate various VR experiences in preparation for demonstrations to faculty and students. Throughout the process, students learned how to troubleshoot individual issues with the equipment while also engaging with the programs associated with the various fields of study.

Engaging with the VR experience.


Through the interaction, students had their questions answered about how to link the VR equipment properly and how to run the programs. In the end, it was a highly engaging Friday that saw the participation of multiple students, including some who had never used VR before (like me).


Ekeke setting up HTC Vive.
Asya overseeing the process.