PAULA POKORNA
Glasgow School of Art
Interaction Design BA (Hons)
Y4 Studio Work and Research
Portfolio 2024
DEVELOPMENT
To view my experimentation leading towards the work under this section, please visit my Experimentation section
Setting up Firebase and p5.js sketch:
STEP01: IN PROJECT TERMINAL - initialising the package: npm init -y
this creates a package.json
TERMINAL: npm i firebase
this installs Firebase and gives you access to the firebase library
STEP02: create a folder called 'src' and put index.js, index.html and the libraries inside
STEP03: initialise the app and database; import ref, set, remove
configuration object - found under Project settings on the Firebase web console
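The initialisation in STEP03 could look roughly like this. A sketch only, assuming the modular (v9+) Firebase SDK; every config value below is a placeholder standing in for the real ones from Project settings:

```javascript
// index.js - sketch of STEP03, assuming the modular (v9+) Firebase SDK.
// All config values are placeholders; the real ones come from
// Project settings on the Firebase web console.
import { initializeApp } from 'firebase/app';
import { getDatabase, ref, set, remove } from 'firebase/database';

const firebaseConfig = {
  apiKey: '...',
  authDomain: '...',
  databaseURL: '...',
  projectId: '...',
};

const app = initializeApp(firebaseConfig);
const db = getDatabase(app);

// ref(), set() and remove() can then act on database nodes, e.g.:
// set(ref(db, 'drawings/test'), { x: 0, y: 0 });
// remove(ref(db, 'drawings/test'));
```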
STEP04: to make the browser able to read the p5 sketch inside a module, it is necessary to hack index.js with:
window.setup = setup;
window.draw = draw;
and index.html with:
<script type="module" src="index.js"></script>
at the end of the head
(The proper way of making it work would be by using p5 instance mode for setup() and draw(), plus adding p5. in front of every p5 function. In this case the p5 sketch could be separated from the Firebase initialising and configuration code, while still being able to import objects from one file to another within the project via:
import { classname } from '<src>';
export { classname };
in this case, no HTML editing is necessary)
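The instance-mode alternative described above could be sketched like this. A minimal sketch of the pattern, not the code used in the project; it assumes the p5 library is loaded (e.g. from the libraries folder):

```javascript
// p5 instance mode: setup() and draw() become properties of a p5
// instance, so no window.setup/window.draw hack is needed.
const sketch = (p) => {
  p.setup = () => {
    p.createCanvas(400, 400);
  };
  p.draw = () => {
    p.background(220);
    p.ellipse(p.mouseX, p.mouseY, 10, 10);
  };
};

// In the browser, p5 is a global provided by the library:
if (typeof p5 !== 'undefined') new p5(sketch);
```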
For the next steps of incorporating the web drawing, I had to follow the previously applied code and adapt it to the code provided by Daniel Shiffman's project in the video below. The code provided by Shiffman is outdated and uses an old Firebase library that has since changed. Therefore, it was necessary for me to understand how modules and the new Firebase library work (as shown in my previous steps).
STEP05: setup() and draw()
TO DO : annotate
Nested loops explained
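As an illustration of the nested loops: a minimal sketch where the data shape (an array of strokes, each an array of points) is my assumption for illustration, not necessarily the project's exact structure:

```javascript
// A drawing as an array of strokes, each stroke an array of {x, y} points.
const drawing = [
  [{ x: 0, y: 0 }, { x: 10, y: 5 }],                       // stroke 1
  [{ x: 20, y: 20 }, { x: 25, y: 30 }, { x: 30, y: 40 }],  // stroke 2
];

// Outer loop walks the strokes, inner loop walks the points of one stroke.
function countPoints(strokes) {
  let total = 0;
  for (const stroke of strokes) {   // outer loop: each stroke
    for (const point of stroke) {   // inner loop: each point of that stroke
      total += 1;                   // here one would draw a line segment
    }
  }
  return total;
}
```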
RETRIEVING DATA
While retrieving the data from the Firebase Realtime Database, I encountered issues accessing the nested nodes via the Firebase key, which I tried to keep due to the possibilities of further drawing animation in case I wished to use that process. Tutors helped me to access these data by:
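In outline, the access problem comes from the auto-generated push keys: you have to iterate over them rather than hard-code them. A sketch with a mock of what a snapshot's value might look like; the keys and point structure here are invented for illustration:

```javascript
// Mock of the object a Realtime Database snapshot value might contain:
// each saved drawing sits under an auto-generated push key.
const drawingsNode = {
  '-NxA1': { points: [{ x: 1, y: 2 }, { x: 3, y: 4 }] },
  '-NxB2': { points: [{ x: 5, y: 6 }] },
};

// Collect every point regardless of the (unknown) push keys.
function collectPoints(node) {
  const all = [];
  for (const key of Object.keys(node)) {   // iterate the push keys
    for (const point of node[key].points) {
      all.push(point);
    }
  }
  return all;
}
```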
DEPLOYING VIA GITHUB PAGES
UNEXPECTED ISSUE01:
I found out that Firebase would disable the database and slow down the drawing response, but the issue would clear when the page was reloaded. Therefore, after discussing this issue with my tutors and with their help, I tweaked the code in order to reload the page every time a drawing is saved. This, however, reflected on the user interaction experience (reloading a page when pressing a button and losing the visual track of your drawing for a second is not ideal). However, the technical issue seemed to be hacked by this solution.
INTERFACE DESIGN
While aiming to preserve certain visual aspects of my previous analogue scroll game, I wished to make the interface as unobtrusive as possible. At the beginning I also wanted to incorporate a paint colour picker, which I later skipped.
I created 3 versions of simple design ideas in Figma:
I ended up choosing the second design but skipping the colour picker after thinking about the use of my real-life video capture assignments from my experimentation. The colours would sometimes not be seen vividly, and thus I decided to use a bright yellow stroke for the current drawing (mimicking captions on video and referencing the narrative creation) while using a white stroke for saved drawings (also referencing subtitle design).
CONNECTING THE PHYSICAL WITH THE DIGITAL:
Since I understood that the Realtime Database can be integrated into Unity, I kept thinking about merging the two realities into one, communicating the human recreation of the fictional on top of the physical reality. For quite some time, I had trouble finding the right way of connecting these ideas effortlessly. However, a few weeks before the Work in Progress show, I decided to create a scrolling video background for the web drawing app to enable the user to change the narrative of the realities captured on the videos, as well as the drawings of others.
The videos gathered can be viewed in my Experimentation section
The final WIP web app:
While testing on an iiyama touchscreen monitor using a Minix machine for installation purposes, the touchscreen showed a significant delay when drawing a line on the canvas, even after our hack with reloading the page. Therefore, I ended up testing on a Mac mini, which did not show any of these issues.
However, using the Mac mini meant facing different problems, such as disabling the remapping of the x and y position of the mouse when the monitor was set to portrait mode. This resulted in the centre of the canvas being the only point where the users' touch related to the position of the current drawing. Therefore, I needed to come up with an alternative solution and compromise on either the visual presentation or the responsiveness of the web app.
Since the idea of the monitor being set vertically represented an integral part of an accessible digital device interpretation, I did not want to compromise on that factor, nor on the functionality. Thus I needed to rearrange the web app's layout so that it would render horizontally yet appear vertical when the monitor was set up in portrait mode.
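My eventual fix was layout-based, but the underlying coordinate problem can be expressed as a simple remap. A sketch, assuming the landscape canvas is rotated 90 degrees clockwise for the portrait display (the function name and the assumption of a clockwise rotation are mine):

```javascript
// Map a touch reported in portrait-screen coordinates back onto a
// landscape canvas that has been rotated 90 degrees clockwise for display.
// canvasH is the landscape canvas height (= the portrait screen's width).
function portraitToCanvas(px, py, canvasH) {
  return { x: py, y: canvasH - px };
}
```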
Solution:
THE WORK IN PROGRESS SHOW:
I decided to wrap the cable enabling the touch to be recognised by the monitor in yellow insulation tape, extending the process of the drawing into the physical space of the installation. On the other hand, I kept the other necessary cables in white, representing the "given".
Getting all the cables very long allowed me to play around with the idea of extending the drawing outside of the screen and made me go crazy with their positioning on the wall.
The creative fiction and idealization of the virtual space is portrayed via the naive and child-like use of cable presentation. In reality, the cables enable the data from the collective drawing to be stored in the Firebase Database.
WORK DESCRIPTION:
PAULA POKORNA
FREE LUNCH
DIMENSIONS VARIABLE
INTERACTIVE WEB APPLICATION, DAS, VIDEO
'Free Lunch' is a reinterpretation of a creative common space showcasing concepts of collectivist norms derived from open-source online cultures. The drawing web application examines the users' domination over the digital canvas and their tendency to leave their illustrations' narrative open-ended for future repurposing of their materials. The work challenges the tangible and real alongside the 'digital commons' while manifesting contemporary hopes and anxieties about their futures.
Reflection:
I did not enjoy the cabling craziness after 30 minutes of having it installed. This made me realise that this needed to be experimented with and rethought well in advance. Since I had no experience with clipping cables onto a wall, I was not able to predict the behaviour of the material and its visual outcomes. This move should have been revised way before the installation day. Additionally, after about an hour of the show, my database started crashing again, not saving any data anymore. The hack of reloading the page did not seem to work properly, so I needed to manually re-update the database links on GitHub every now and then and re-host the website, ending up with about a database per day of the show. - Not ideal.
RETHINKING THE POSSIBLE FINAL OUTCOMES
After the WIP show, having some time off and being fully engaged with working on my dissertation, I ended up being quite frustrated with rethinking the concept of the reality-we-simulation/Unity transcendence in a more effective way. I knew I wanted to step back from the drawing purpose of the work, working instead with a real-time camera view and not deliberately pointing out the fictionalised realities, but extending and mirroring realities instead. I had a clear concept in mind, knowing what I wanted to communicate; the issue was finding a visual way of achieving it while integrating my technical developments. At that point I realised that working with social topics and philosophy is much harder to contextualise in the scope of a single outcome.
GENERATING MESH IN UNITY - PRACTICE
Step 1: Create an empty game object
Step 2: Add Component > New Script
Step 3: Generate the mesh by:
Step 4: Add triangles and vertices by:
Step 5: Recalculate all normals and bounds in void Start():
by: Mesh.RecalculateNormals();
Mesh.RecalculateBounds();
// Bounds: the bounding box which contains all vertices of the cube
// Normals: vectors at the vertices used by the rendering engine to calculate light reflections
Step 6: Add a Mesh Renderer component and select a material
Step 7: Add 4 more faces of the cube by:
ANIMATING THE CUBE:
Step 7: in the public class:
public float UpDownFactor = 0.1f;
public float UpDownSpeed = 6f;
Step 8: in void Update():
Mesh.vertices = GenerateVerts(Mathf.Sin(Time.realtimeSinceStartup * UpDownSpeed) * UpDownFactor);
- the value of GenerateVerts() in this case can vary depending on animation
Step 9: set the default parameter: private Vector3[] GenerateVerts(float up = 0f)
Step 10: add ' + up ' to every top vertex y value of the Vector3 (in this case after every y value of 2, so: new Vector3(-1, 2 + up, 1), ...)
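The maths of steps 8 to 10 boils down to a sine offset added to the top vertices each frame. Sketched here in JavaScript for brevity (Math.sin standing in for Unity's Mathf.Sin; the values come from the script above):

```javascript
const upDownFactor = 0.1; // amplitude, as in the Unity script
const upDownSpeed = 6;    // frequency multiplier, as in the Unity script

// Vertical offset applied to every top vertex at time t (seconds),
// equivalent to Mathf.Sin(Time.realtimeSinceStartup * UpDownSpeed) * UpDownFactor.
function upOffset(t) {
  return Math.sin(t * upDownSpeed) * upDownFactor;
}

// A top vertex such as (-1, 2, 1) becomes (-1, 2 + upOffset(t), 1),
// so the cube's top face bobs between y = 1.9 and y = 2.1.
function topVertexY(t) {
  return 2 + upOffset(t);
}
```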
Result:
CONNECTING THE FIREBASE REALTIME DATABASE WITH UNITY:
For the next step, to be able to integrate my collected data and visualise it, it is necessary to connect the Firebase DaS with Unity, and thus to properly read through the Firebase Unity documentation:
https://firebase.google.com/docs/database/unity/retrieve-data
First, it is necessary to download the SDK and install the Firebase library into Unity properly. I faced some issues with Unity accessing the libraries on Mac, and the link below debugged the issue, which had to do with the default security and privacy settings of the Mac.
Play Mode Runtime Errors:
If your game starts, but runs into issues with Firebase while running, try the following:
Ensure that you approve Firebase bundles in "Security & Privacy" on Mac OS
If, on starting up your game in the editor on Mac OS, you are presented a dialogue that says, "FirebaseCppApp-<version>.bundle Cannot be opened because the developer cannot be verified.", you must approve that specific bundle file in Mac's Security & Privacy menu.
To do so, click Apple Icon > System Preferences > Security & Privacy
In the security menu, about halfway down the page, there is a section that says ""FirebaseCppApp-<version>.bundle" was blocked from use because it is not from an identified developer."
Click the button labeled Allow Anyway.
With the help of the tutorial above, I was able to print the data in my Unity scene, updating in real time when the data is changed.
To be able to refer to a specific database under a Firebase project, it is necessary to change the database reference from the DefaultInstance property to the GetInstance method:
void Start()
{
reference = FirebaseDatabase.DefaultInstance.RootReference;
FirebaseDatabase.DefaultInstance.GetReference("counter").ValueChanged += HandleUpdateScore;
}
// DefaultInstance automatically connects to the default database of the project and does not need a database URL, unlike the GetInstance method:
void Start()
{
reference = FirebaseDatabase.GetInstance("https://unitytest-f541e.europe-west1.firebasedatabase.app/").RootReference;
FirebaseDatabase.GetInstance("https://unitytest-f541e.europe-west1.firebasedatabase.app/").GetReference("counter").ValueChanged += HandleUpdateScore; // each time the value is changed on the node, this function is called
}
The code below, provided by the tutorial above, handles the connection well and is thus a good starting point for my next process.
After this worked I decided to make the button work properly and thus to test how fast the database works. For this I created two buttons each incrementing the value of their own node in the database:
1.
public Text scoreText1;
public Text scoreText2;
// Start is called before the first frame update
public Button incrementButton1; // Reference to the button object in your scene
public Button incrementButton2; // Reference to the button object in your scene
private string counter1 = "counter1"; // Path to the first counter in the database
private string counter2 = "counter2"; // Path to the second counter in the database
2. in void Start() {
FirebaseDatabase.GetInstance("URL").GetReference("counter1").ValueChanged += HandleUpdateScore1; //each time the value is changed on the node, this function is called
FirebaseDatabase.GetInstance("URL").GetReference("counter2").ValueChanged += HandleUpdateScore2; //each time the value is changed on the node, this function is called
incrementButton1.onClick.AddListener(IncrementScore1);
incrementButton2.onClick.AddListener(IncrementScore2);
}
3. I also duplicated the functions HandleUpdateScore and IncrementScore, which is not a big issue for now since it's only two counters. However, if used for a larger number of counters, the script would need to be optimised.
4. Next, I created two scripts, each retrieving the data of either counter1 or counter2, and attached them to an empty game object called Location. This creates two public floats from the real-time data of the counter1 and counter2 nodes of my database.
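The duplication noted in step 3 could be removed with a small factory that builds one handler per counter path. A JavaScript sketch of the pattern (the Unity C# version would use the same closure idea; all names here are mine, and the database is replaced by a local object so the logic is self-contained):

```javascript
// Local stand-in for the two database counters.
const counters = { counter1: 0, counter2: 0 };
const listeners = {};

// Factory: one update-handler per counter path, instead of
// hand-written HandleUpdateScore1 / HandleUpdateScore2 copies.
function makeHandler(path) {
  return (newValue) => {
    counters[path] = newValue; // store the latest value for that counter
  };
}

// Register a handler for each path; adding counter3 is one more entry.
for (const path of ['counter1', 'counter2']) {
  listeners[path] = makeHandler(path);
}

// Simulate two value-changed events arriving from the database:
listeners.counter1(5);
listeners.counter2(9);
```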
MESH GENERATION:
//1. create array of - vertices ! each vertex has 3 points !
// 2. create an array of triangles
//3. create a mesh object
//4. add the mesh to a mesh filter
// 5. create shape
//6. in the shape function, first define a few vertices and later the triangles
//7. create a function that uses the vertex and triangle data for the mesh generation - UpdateMesh()
add global:
[RequireComponent(typeof(MeshFilter))] // attribute to make sure there's always a MeshFilter on the same object as the script
Mesh attributes:
Vertices:
Triangles:
Unity reads vertices in clockwise order:
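To make the winding rule concrete, here is the smallest case from the commented steps: one quad as four vertices and two triangles, with the index order chosen so each triangle winds clockwise on the visible side. JavaScript is used only to keep the sketch testable; the Unity version fills Mesh.vertices and Mesh.triangles with the same data:

```javascript
// Four corners of a unit quad in the XZ plane (each vertex has 3 components).
const vertices = [
  [0, 0, 0], // index 0: near-left
  [0, 0, 1], // index 1: far-left
  [1, 0, 1], // index 2: far-right
  [1, 0, 0], // index 3: near-right
];

// Two triangles; each trio of indices is listed clockwise when viewed
// from the side that should be rendered (Unity culls the other side).
const triangles = [
  0, 1, 2, // first triangle
  0, 2, 3, // second triangle
];
```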
After some practice and understanding of the Firebase capabilities within Unity, I started thinking about my adjacent works from previous years. Last year, working with real-time camera capture and RGB data to generate audio gave me that way of data handling ready to use. The only thing left was to translate the Java from Processing into JavaScript using p5.js and host it on GitHub.
Additionally, I was sure that I wanted to use the gathered videos; however, I could not decide whether to have them as the data inputs or to contextualise them differently. Furthermore, having Firebase well documented, alongside using my Design Domain ideas of a form/terrain-shaping Unity game, enabled me to act quickly and puzzle the functionalities together in a way that communicates the importance of collectivism, human impact on the digital space and the transcendence of the reality bias.
Adjacent Works:
Idea Development:
I ended up preferring the annotated installation sketch of the last scan and thus started developing the technical functions of the project. The idea was based on having multiple screens running a Unity scene with multiple game views, while having embedded websites of the gathered video works playing in the background (instead of a sky), affecting the long, horizontal land (spanning across all the screens) in real time via the GM Mesh Deformer asset.
An alternative I found on social media could be based on the following tutorial, using Houdini (which I've never used before, so I preferred to stick to the plan):
I continued with hosting the videos along with the RGB scanning code on Github, resulting in issues with audio autoplay policies. Therefore, I needed to disable the automatic audio:
Firebase and RGB data reading:
console.log(int(redAverage), int(greenAverage), int(blueAverage));
let data = ref(firebaseDatabase);
set(data, {
  red: int(redAverage),
  green: int(greenAverage),
  blue: int(blueAverage),
});
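The averages being written above come from walking the video's pixel array. A minimal sketch of that computation, assuming the flat RGBA layout that p5.js uses for pixels[] (the four-pixel sample is invented for illustration):

```javascript
// pixels as p5.js lays them out: [r, g, b, a, r, g, b, a, ...]
function rgbAverages(pixels) {
  let r = 0, g = 0, b = 0;
  const count = pixels.length / 4; // four channels per pixel
  for (let i = 0; i < pixels.length; i += 4) {
    r += pixels[i];
    g += pixels[i + 1];
    b += pixels[i + 2];
    // pixels[i + 3] is alpha, ignored here
  }
  return { red: r / count, green: g / count, blue: b / count };
}

// Example: two pure-red and two pure-blue pixels.
const sample = [255, 0, 0, 255, 255, 0, 0, 255, 0, 0, 255, 255, 0, 0, 255, 255];
```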
Issue with github deployment:
The video is not loaded, and thus the sketch and its access need a review, even though everything works via the local host. The maximum size of a file uploaded to a GitHub repository is 25MB, and this specific video was 23MB. This might have caused problems, because it might mean the whole branch is supposed to stay under 25MB?
I’ve been able to deploy the live webcam capture sketch without any issues
+ Printing out available cameras:
137274387454238457013945701 hours later I finally decided to compress the videos even more:
https://paulapokorna.github.io/vidko1/
and of course it worked (:
This export is 14.7 MB, and even though all of the compressed videos are pixelated, I don't find that an issue; it even highlights the process of the data gathering. Thus, I decided not to resolve the video quality issue in this case. In a different situation I would think of getting a URL embed from Vimeo/YouTube and reading data on top of that. This, however, might need some time for development as well, since the RGB-reading JavaScript directly accesses the video file and not the "screen".
https://github.com/paulapokorna/vidko1
https://paulapokorna.github.io/vidko1/
Quick video compression and pixelation, but HandBrake is better:
https://www.veed.io/edit/73deadbd-84c2-47eb-8996-7e5c9ba6cbfa/media
Pixelated outcomes:
Compressing a video of around 400MB into around 15MB using the HandBrake app leaves compression distortion artefacts on the video outcomes.
Continuing on trying out the embedding of these websites into the scene by following:
https://github.com/gree/unity-webview
Managing to end up with this outcome during runtime:
As the embedding part seemed to be causing too many issues, I decided to skip it and use my videos as materials for the terrain cubes instead, using the live camera capture RGB analyser to control an empty game object which spawns pebbles, causing impact and deformation of the terrain upon collision.
Visualisation:
Set up for degree show:
Necessary asset by Matt Gray:
In order to manage the positions of the empty game object, ChatGPT helped me to generate these two scripts, with my editing, to be able to move the spawning position in real time:
XYZ LIVE DATA SCRIPT:
! Line 44 (red value division) and line 72 (blue value division) need to be adjusted based on the lighting conditions of the venue in order to calibrate the spawning hover into the desired game view!
In studio day conditions:
Displ1: redV/9, blueV/ -3
Displ2: redV/9, blueV/ -1
Displ3: redV/9, blueV/ -40
Displ4: redV/9, blueV/ 9
Displ5: redV/10, blueV/ 4
+
TRANSFORM POSITION SCRIPT:
RESULT:
EDITING PLAYER SHOOT FROM GM MESH DEFORMER
I wanted to make the shooting automatic and not reliant on mouse presses. With the help of ChatGPT I got an automated script with a public, adjustable shooting interval.
In order to avoid craters forming on the landscape, I decided to edit the code in a way that it shoots every shootInterval() seconds automatically, but only when the value of red in my Firebase was higher than 160, or the value of green higher than 145, or the value of blue higher than 145.
The collected data helped me to determine the approximate deviations of each RGB channel, both in situations without any object presented and when colourful objects were presented in front of the camera.
Shooting edit 2:
- added minimum trigger values which can be adjusted when the camera is installed in the venue (this avoids the generation of spawning objects when the camera is not faced with any R, G, or B value increase):
- made the shotPower calculate the average RGB value / 100 to add more diversity and thus avoid cratering in one location, and used this instead of the Y axis. After testing, it was clear that the height of the affected empty game object needs to be stable in order to achieve the desired effect.
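The trigger logic described above reduces to a threshold check plus an average-based power. A sketch using the threshold values quoted in the text; the function and variable names are mine, not the script's:

```javascript
// Thresholds from the text: shoot only when red > 160, or green > 145,
// or blue > 145 (tunable minimums for the venue's lighting conditions).
function shouldShoot(red, green, blue) {
  return red > 160 || green > 145 || blue > 145;
}

// Shot power derived from the average RGB value divided by 100,
// adding variation so the pebbles do not crater a single spot.
function shotPower(red, green, blue) {
  return (red + green + blue) / 3 / 100;
}
```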
These edits did not seem to help much with avoiding the cratering on one location and therefore, I changed the shape of the pebbles, increased their bounciness and tweaked their mesh colliders. This solution helped me to achieve the desired effect of terrain mesh deformation.
Editing the Scene Setup:
In order to make the sky continue its pattern onto the monitors positioned next to each other, I needed to remove the skybox and attach a plane with the sky material behind the cubic terrain.
FINAL SETUP:
Pebbles with intentional reflective material forming the digital commons upon human impact:
Multi Display setUp:
Hardware:
6x 22" LG Monitor
6x monitor wall mounts
6x kettle cable
6x HDMI into 1 adapter/splitter: Pluggable = 4 ports
1x HDMI adapter 1 into 2
1x Webcam attached onto vertical monitor
1x long USB to USBC adapter for WebCam - Comp
3x Tablets + power cables (when using a variety of data sources/surroundings from different parts of the venue)
3x Tablet wall mounts
6x TV cables
1x Internet access
7 power points necessary for installation of 6 monitors
+ 3x/(whatever number of tablets used) power points for powering tablets
For Final Work summary go to Final Work section