I'm Charles B, a Software Engineer from Barranquilla, Colombia, with a master's degree in Project Management and experience building startups, creating software products, and leading engineering and data teams. I have been coding for over 13 years, and I want to share my day at Ideaware.
What do you do first thing in the morning?
I wake up around 7 am, go to the kitchen, drink a glass of water, and start preparing coffee with cinnamon. Then, I shower, put on comfy clothes, and am ready to go to my office room.
How do you begin your working day?
Not every day is the same, but generally, I join a 30-minute preparation call where I sync with other leaders on the status of different non-engineering projects and discuss the previous day's results and the priorities for the day ahead. I then check my messages on Slack and email (even though we rarely use them to communicate), review upcoming meetings, and go through our sprint management platform to get an overview of the pending tasks and priorities of the current sprint. Twice a week, I also have sessions with other senior leaders, sometimes including the company CEO, to chat about engineering progress and align recent sprint execution with business priorities and expectations.
What are the best things about Ideaware?
Ideaware has clients with exciting projects, technologies, frameworks, and coding languages. Working with a client is like being part of that company. You get treated as if you were part of the team with no differentiation. Some clients have Ideaware engineers, designers, quality assurance, and customer support, which is nice because it makes the company diverse and exciting.
What tools do you use to manage your projects?
We use Monday.com integrated with GitHub. The whole company uses Monday.com and has access to the relevant workspaces, so we handle a significant part of the communication around tasks, epics, and projects through this tool. For more detail, we discuss things on Slack and keep the conversation asynchronous, but for deeper conversations, we may have quick syncs over Google Meet, Zoom, or even Slack Huddles.
What are your current challenges?
My main goal is to lead the construction of a new product that would take the business to a new level. So, my challenge is to fully understand how the industry works from a strategic perspective, help find opportunities that could leverage technology, build the team for it, and manage the whole process.
I’m working in a new industry where I don’t have previous experience, so I have a bunch of things to learn to meet my goal, which is very challenging but still fascinating and enriching. The best part is that all my teammates are always willing to teach me everything, encourage me to ask many questions, and cut me some slack to catch up.
What is the most exciting thing that has happened to you working at Ideaware?
I could mention a few, but I think the main ones for me have been parties organized by Ideaware and a trip to San Francisco sponsored by the client.
On the one hand, Ideaware recognizes the importance of human connections and encourages it in different ways; one of them is organizing fun parties and getting the whole company into an in-person experience.
On the other hand, in my case, the client wanted to meet me in person to help me build a stronger relationship with other leaders, so they invited me to spend a week working in San Francisco with other teammates from the US.
Thanks for reading. Please feel free to share and don’t forget to subscribe to our newsletter below!
I was introduced to software development during a coding Bootcamp in 2018. There, I was amazed by the simplicity of the Ruby on Rails framework, especially the MVC pattern. It was very simple: the model was in charge of communicating with the database and retrieving all the necessary data. The controller was responsible for responding to user actions, asking the model for the data the user was requesting, and sending it to the view, which was responsible for rendering what the user would see in the browser: simple, plain HTML.
Suddenly, how the web worked made more sense to me. After a couple of projects, I started to want more interactivity in my websites (after all, I was a user of other products, and the bar for quality was high). That was how I met JavaScript and AJAX. I was fortunate enough to learn JS in the ES6 era, so I was able to achieve most of the things I wanted with sprinkles of vanilla JavaScript. With time, it started to get messier. I learned about REST APIs and Single Page Application (SPA) frameworks, but I always felt that it was dirtier than the initial Ruby on Rails simplicity I had been introduced to.
I never really dove into the SPA wave; I learned the basics, but for my entrepreneurial venture, Rails was more than enough. Nonetheless, after most of my startups failed, I had the opportunity to work with very talented people who knew the SPA business very deeply. I went through the migration from a Rails monolith to a decoupled Frontend and Backend architecture with them. I would love to say the migration went flawlessly, but I saw how the complexities of the new architecture affected everything from the product itself to the development and engineering process in a small team like ours.
I experienced firsthand the problems with caching when a new release was out and the user did not refresh the page: the slow first paint due to the need to load a big chunk of JavaScript on the first request, the complexity of managing state, the increased difficulty of debugging issues across two codebases, and the drop in deployment speed while frontend engineers waited for backend engineers to make a change. It was messy.
On the other hand, I was closely following Rails development. I saw how Rails stayed on the sidelines when frameworks like React and Vue started to emerge. Instead, they launched Turbolinks, but that did not go as well as expected: lots of jQuery code broke. They kept working on it and improving it, and after that, they launched Stimulus JS, a great minimalistic framework that paired beautifully with Turbolinks (I adopted it very early on). But still, if you wanted a very interactive interface, you ended up writing way too much custom JavaScript. At the end of 2020, things changed. With the release of Hotwire, the puzzle was completed: the best of the Rails framework without sacrificing any interactivity in your apps, and all of this while writing about 80% less JavaScript than usual. Mind-blowing!
So how did they achieve such a fantastic result? The reality is that, as with any great innovation, the process was very iterative. All the previous attempts at avoiding JavaScript complexity finally paid off. To understand this, let's go over the foundations of the most basic concepts of Hotwire:
Turbo: Turbolinks on Steroids
It is the heart of Hotwire; most of the magic is here. Turbo provides four complementary techniques that help you speed up page changes and form submissions without writing any custom JS, divide complex pages into components, and stream partial page updates over WebSockets.
Everything starts with Turbo Drive (AKA Turbolinks), an interceptor for all link clicks and form submissions that, instead of reloading the page, performs the request in the background and then replaces the body (and merges the head) of the document with the HTML returned from the server. This single approach speeds up page-level navigation a lot because you do not have to reload all the assets, and the speed at which a browser can process HTML is very similar to the speed at which it can process JSON.
For the pages that you do not want to reload completely, you now have Turbo Frames. They work similarly to Turbo Drive, but instead of replacing the complete body, you have the ability to encapsulate small parts of the document that can perform individual requests to the server and replace only the content of the matching frame. For example, you can replace an edit button with the form for editing, using the same template for the edit form as if you were visiting the editing page instead, an approach that works great with HTTP/2 and caching.
Turbo Frames are great when we work with direct interactions within a single frame, but what about when we need to update other parts of the pages outside of the Frames? Well, then we can use Turbo Streams. This technique allows us to stream HTML changes to any part of the page in response to updates sent over a WebSocket connection.
These three concepts can take you far, but sooner or later, you will need some customization, and that is what Stimulus JS is for.
Stimulus: A JS framework for the HTML you already have.
Stimulus allows you to connect JavaScript objects to elements on the page using simple annotations. You know that moment when you add a CSS class and the element magically changes position, shape, or color? For me, Stimulus is the same concept but with JavaScript instead of CSS. You just need to link your existing HTML with it, and then the magic happens: no query selectors and no need to generate the whole DOM dynamically from JavaScript itself.
The most exciting part of all is that it monitors the DOM for changes. For example, if a Turbo Stream is sent or the content of a Turbo Frame changes, the newly added HTML will connect with its corresponding Stimulus controllers and get the needed functionality immediately. No page reload is needed, and no need to listen for fancy Turbolinks load events.
Conclusion
The development of Hotwire and the enhancements coming to the Ruby on Rails framework are both exciting and refreshing. In a way, it feels like the first time I was introduced to the MVC pattern, a very simple yet powerful approach for building web (and now even mobile) applications without the well-known complexity of building a compelling SPA.
I encourage anyone reading this to try these new technologies and rethink the need for complex front-end frameworks. Finally, another path can take us to the same level of interactivity with a fraction of the complexity. It is on us to keep fighting for a world with better frameworks and techniques.
I recommend this approach to you, especially if you have a small development team or you are an indie hacker building stuff on your own. I guarantee you can go 10x faster this way, and you will end up with much better user experiences because of this.
What do you think? Is the web going to be built with more or less JavaScript in the future?
Thanks for reading. Please feel free to share and don’t forget to subscribe to our newsletter below!
In this blog post, we will look at how to solve a simple problem with a Python program, and then we will try to speed it up by using Python’s multiprocessing module.
The main goal of this post is to illustrate how a program can be made much faster by parallelizing work through multiple processes, as opposed to running the whole workload through a single process.
Problem introduction – TCP port scanning
The problem we will be trying to solve is known as TCP port scanning. The problem consists of finding open TCP ports in a given IP address. Such a process could be used by network administrators to identify potential risks in their networks and by attackers to attempt to gain control over exposed systems.
TCP port numbers are 16-bit values, so there are 65536 possible ports (0 to 65535) per IP address. Port 0 is reserved and cannot be used, so we will focus on the range 1 to 65535.
Given a hostname such as www.google.com, a start-port, and an end-port, our program will have to find the IP address of the given hostname and then print all open TCP ports in the given range.
In order to do this, our program will have to iterate through all ports from start-port to end-port and, on each step, attempt to establish a connection through the current port. If the connection can be established successfully, we know the port is open, and we will print a message to let the user know.
A simple single-process solution
Let’s try to create a simple Python function using the socket module. This function takes an IP address and port number as inputs. It returns True if the port is open, and False otherwise.
The function is very simple. First, it wraps its calls inside a try/except block. It then tries to create a connection to the specified address and port. If this connection is successful it will immediately close it and return True, letting us know the port is open. If any problem occurs and the connection cannot be established it will return False, letting us know the port is closed.
The value of timeout=1 is needed to allow our program some time (1 second in this case) to establish the connection. If after 1 second our program can’t establish a connection we will assume the port is closed.
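A minimal sketch of such a function, following the description above (the exact code may differ in detail), could look like this:

```python
import socket

def is_port_open(ip, port):
    try:
        # Try to establish a TCP connection, waiting at most 1 second
        connection = socket.create_connection((ip, port), timeout=1)
        # If we get here, the port accepted the connection, so it is open
        connection.close()
        return True
    except OSError:
        # Timeout, refused connection, or any other error: assume the port is closed
        return False
```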
Now let’s wrap our function inside a complete program by reading some command-line arguments and printing appropriate messages. We will use argparse for argument parsing and time to measure execution time.
This is how our program works. When executed, it reads the --hostname, --start-port, and --end-port arguments. If a port range is not specified, it defaults to all ports, 1 to 65535. It then creates a variable called start_time to store the current timestamp in seconds, executes the scan_host function, and finally prints the elapsed time in seconds.
The scan_host function first translates the given hostname to an IP address, then iterates through all ports in the specified range and calls our initial is_port_open function for each port. If it finds an open port then it prints a message.
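Putting it together, the single-process program might look roughly like this sketch (the structure follows the description above; details may differ):

```python
import argparse
import socket
import time

def is_port_open(ip, port):
    try:
        connection = socket.create_connection((ip, port), timeout=1)
        connection.close()
        return True
    except OSError:
        return False

def scan_host(hostname, start_port, end_port):
    # Translate the hostname into an IP address
    ip = socket.gethostbyname(hostname)
    for port in range(start_port, end_port + 1):
        if is_port_open(ip, port):
            print(f"Port {port} is open")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Simple TCP port scanner")
    parser.add_argument("--hostname", required=True)
    parser.add_argument("--start-port", type=int, default=1)
    parser.add_argument("--end-port", type=int, default=65535)
    args = parser.parse_args()

    start_time = time.time()
    scan_host(args.hostname, args.start_port, args.end_port)
    print(f"Completed in {time.time() - start_time:.2f} seconds")
```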
Let’s name our program port_scanner.py and save it.
Scanning 500 ports
Time to do some tests! Let’s see how long it takes to scan through 500 ports.
So our program works just fine and we were able to find two open ports. However, scanning 500 ports took 500 seconds. This is something we could have predicted given our 1-second timeout per connection attempt.
Given this, if we wanted to scan through all 65535 available ports, our program could take 65535 seconds to complete, or a little over 18 hours.
If we don’t have all day to portscan a single host, one thing is clear: our program must run faster.
One thing which comes to mind would be lowering our timeout value, but this could compromise accuracy. TCP connections need some time to establish, and not giving our program enough time could result in wrongly assuming some ports are closed when in reality they could simply take a bit more time to accept a connection.
A better approach to speed up our program would be to try and connect to multiple ports at once, instead of trying a single port at a time. Fortunately, we can achieve this by parallelizing our workload across multiple processes. This is when multiprocessing comes to the rescue.
Speeding things up with Python’s multiprocessing
Python's multiprocessing module provides a set of classes that allow us to spawn subprocesses from a program's main process. We will look at how we can use the Process class to speed up our port scanning program.
First, let’s modify our scan_host function to take a new workers argument and spawn a set of processes to divide the workload.
Let’s look at the different parts of our new function.
The new workers argument indicates how many subprocesses we want to launch. So, given start_port and end_port, we can calculate the total number of ports to scan and then divide this number by the number of workers we will be launching.
At this point, we can iterate through our port range and compute the start_port and end_port of each one of our workers.
To illustrate this with an example: if we ask the program to scan ports 1 to 500 with 10 workers, the first worker would be set up to scan ports 1 to 50, the second worker ports 51 to 100, and so on, up to the tenth worker scanning ports 451 to 500.
Now in each iteration, we can create a new instance of the Process class to spawn a new subprocess with the given arguments. We will then start the process and store it in our processes list.
When our workers launch they will call the function provided as the target argument of the Process constructor. In this case the function is scan_address. We will look at this function later.
Finally, we will call Process.join on each process to wait until they all finish.
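Following that description, a sketch of the parallel scan_host and the scan_address worker function could look like this (it reuses is_port_open from before; the exact way of splitting the range is one possible approach):

```python
import socket
from multiprocessing import Process

def is_port_open(ip, port):
    try:
        connection = socket.create_connection((ip, port), timeout=1)
        connection.close()
        return True
    except OSError:
        return False

def scan_address(ip, start_port, end_port):
    # Worker function: each subprocess scans its own slice of the port range
    for port in range(start_port, end_port + 1):
        if is_port_open(ip, port):
            print(f"Port {port} is open")

def scan_host(hostname, start_port, end_port, workers):
    ip = socket.gethostbyname(hostname)
    total_ports = end_port - start_port + 1
    ports_per_worker = total_ports // workers

    processes = []
    for i in range(workers):
        # Compute the slice of ports assigned to this worker
        w_start = start_port + i * ports_per_worker
        # The last worker takes any remaining ports
        w_end = end_port if i == workers - 1 else w_start + ports_per_worker - 1

        process = Process(target=scan_address, args=(ip, w_start, w_end))
        process.start()
        processes.append(process)

    # Wait for all workers to finish
    for process in processes:
        process.join()
```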
Now, let’s put all pieces together into a new program.
Let’s name our new program port_scanner_parallel.py and save it.
Scanning 500 ports again
Now that we have what should be a much faster port scanner, let’s try scanning 500 ports again. This time we will launch 10 parallel workers.
As we can see, with 10 parallel workers we just gained a 10x improvement in execution time!
Last time, with a single process, it took 500 seconds to scan through 500 ports. Now with 10 parallel subprocesses, it takes only 50 seconds.
Scanning all ports of a host
Now that we have such a fast port scanner we can push things to the limit. Let’s try scanning all 65535 ports of a host with 100 parallel workers.
We were able to scan all ports of this host in just 131 seconds and have found four open ports.
Conclusion
We have looked at how to solve the TCP port scanning problem in Python. We initially looked at a simple single-process solution and then learned how to speed it up by using Python’s multiprocessing module.
We have learned how dividing the workload between a set of parallel workers can offer massive improvements in execution time.
Many computing problems can be parallelized like this, and now that you know how to use multiprocessing you have added a valuable tool to your toolbox. It is now up to you to apply it wisely.
Happy coding!
Thanks for reading. Please feel free to share and don’t forget to subscribe to our newsletter below!
Software projects are in a boom moment; anyone working in a tech role has experienced that feeling of having chosen the right path. Not just because there is high demand for this kind of job opening, but because of the diversity of projects you can get involved in and the emerging wave of tools you can take advantage of to keep improving and learning.
More than ever, we are living in a changing world, and that is reflected in software projects. So, as team members, we need to be agile and keep shifting and evolving faster. But how do you keep up in a fast-paced environment and survive to tell the story?
From a Quality Assurance Analyst and Project Manager perspective, here are some tips that have worked for me so far:
1. Take advantage of the existing frameworks
We have all heard about SCRUM, LEAN, and Kanban. Even if you are working specifically with one of these frameworks, keep using the best of each one: the visibility and transparency that a Kanban board provides, LEAN's focus on identifying and minimizing wasted time, and the flexibility and continuous feedback of SCRUM. You could also use a Fishbone diagram to identify the causes of issues. The list goes on!
2. Transparency
Transparency is one of the SCRUM pillars that I consider incredibly relevant. It helps us avoid micromanagement and misunderstandings. Make visible what you are working on and its status (use tasks, for instance; most boards allow you to create tasks under a User Story/Card). That way, any team member will be aware of the amount of work the team is carrying and what each member is doing. It also helps to identify redundant and time-consuming activities, which is an opportunity to apply LEAN principles, find the causes, and avoid wasting time.
3. Communication
In a fast-paced environment, it is easy to miscommunicate. Everything happening simultaneously, with many people trying to collaborate to make a better product, can sometimes be overwhelming. Just breathe, get organized, and do it quickly; find a way to iterate on the feedback, and make sure the whole team is 100% focused on what is happening with the product, events, deadlines, etc. That will make it easier to react smoothly to constant change.
4. Ask Questions and anticipate
Asking questions on a project has always been essential; I mean, someone has to ask the questions, right? Why not you? It will lead to a better understanding for you and the whole team. When you anticipate the right questions, it clarifies the requirements, the insights needed, and what is missing.
Create a culture of clearing up doubts by asking questions. It is always better and time-saving to ask questions at the correct time, preferably at the beginning.
On a final note…
Everything in the tech world will keep changing, whether you are working on a huge product or on a bunch of small products. You are already involved, and this will not stop, so we had better keep learning from books and, more importantly, from experience. Always keep in mind the lessons learned, checklists, or any other helper that comes in handy for you and your team. It is a work in progress for me, and I guess for each one of us. So keep going! 💪🏽
Thank you for reading, and do not forget to share and subscribe to our newsletter below. If you have any questions about our processes, we are here for you. Contact us!
"A black box that does magic tricks " . Maybe that's the idea that many of us have about machine learning,...
“A black box that does magic tricks 🦄”. Maybe that's the idea that many of us have about machine learning, especially if we have never worked with artificial intelligence before. But the reality is that artificial intelligence is becoming more and more relevant in almost every branch of engineering and development, including the web.
But not everything has to be rocket science, right? So let’s take a look at some scenarios where machine learning could take our web applications to the next level 😉
Let’s analyze the data!
This is one of the machine learning applications that comes to mind most quickly: taking the large amount of data we collect and using specialized algorithms to discover patterns or inconsistencies. This analysis of the information can be used to make changes almost in real-time.
It’s time to understand user behavior! 👀
Your web application can use machine learning to accurately understand user behavior. For example, an e-commerce website can apply ML algorithms to monitor and understand a user’s affinity with a product or category. It could even predict expected user actions based on search history and interaction within the results page. Better results and more accurate recommendations can mean more sales and more time the user spends on the website.
Did you know that machine learning can also help you optimize your response times? That's what a page forecasting model is all about: predicting the next page the user will visit using historical data from Google Analytics. With this prediction, you can apply techniques such as prefetching to make navigation faster.
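As a toy illustration of the idea (not tied to any particular library, and with made-up session data and page paths), you could build a simple next-page predictor by counting which page users most often visit after the current one:

```python
from collections import Counter, defaultdict

# Hypothetical navigation history exported from your analytics tool:
# each inner list is the sequence of pages visited in one session.
sessions = [
    ["/home", "/products", "/products/42", "/checkout"],
    ["/home", "/blog", "/products"],
    ["/home", "/products", "/products/7"],
]

# Count how often each page follows another
transitions = defaultdict(Counter)
for session in sessions:
    for current_page, next_page in zip(session, session[1:]):
        transitions[current_page][next_page] += 1

def predict_next_page(current_page):
    # Return the most frequent next page, or None if we have no data
    counts = transitions.get(current_page)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next_page("/home"))  # -> "/products", a good candidate to prefetch
```

A real forecasting model would be trained on much more data and could use richer features, but the goal is the same: anticipate the next navigation and prepare it ahead of time.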
Where is my 21st-century user experience? ⏳🔊🖖
Web technologies in the 21st century have already evolved to an impressive level. There are already several APIs based on artificial intelligence within browsers* that enable alternative and adaptive experiences.
One example of these technologies is the Web Speech API:
You can create applications that are voice-driven or that integrate voice recognition into forms or search boxes as Google or YouTube do.
The Google search box has integrated speech recognition provided by the browser.
Please note that several of these technologies are not fully supported by browsers. For example, Safari supports Speech Synthesis but does not support Speech Recognition.
But wait… audio isn’t everything. The camera can also be used to play/experiment with the user using ml5.js: “machine learning for the web in your web browser”. Through ml5.js we can use a variety of models. For example, PoseNet or Handpose, for real-time pose estimation (let’s play using our body!). The Coding Train has an introductory video that I recommend: ml5.js Pose Estimation with PoseNet.
Handpose in action ✊✋
Artificial intelligence is an exponentially growing trend. Every day we see it more and more in web development. Let’s take advantage of machine learning to make our application an unforgettable experience. Happy hacking!
Thank you for reading and do not forget to share and subscribe to our newsletter below. If you have any questions about our processes, we are here for you. Contact us!
We’ve all been there. Talking to a friend when suddenly you get that “Aha!” moment of a brand new idea for the next big mobile app. The current state of technology allows many people to dream big and start building software to solve a problem they’ve experienced or seen first hand.
However, this is no easy feat and if you want to jump on the boat of creating a web product, you’d better prepare yourself with some readings that will open your mind to a new world of opportunities.
Keep on reading to find out which books you should be buying next.
A Good Comparison First
A good analogy for building software is building a house. You plan, assign a budget, get advised, hire ideal people, and off you go.
On the other hand, building software is more flexible than a house.
Once you have the foundations and walls of your new house, changing the base structure will be very expensive and time-consuming.
Software is more malleable.
You won't be able to modify the base structure every time, but there are techniques to handle the constant evolution that software projects face.
The most important thing about software being malleable is that you have to embrace projects with a very different mindset where constant change is a must and nothing is ever taken for granted.
You also have to consider a new way to handle things. To consider software as a new universe where things happen differently.
The following books are all about this mindset.
Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days
What I found best about this book is the approach it proposes. Sprint was written by Google Ventures members. The authors are people who have the money and power to hire people and pay for services, but that's not the approach they want you to take.
Spending money is expensive, so Sprint wants you to do something better: validate your idea as many times and as fast as you can.
The book wants you to experiment.
Your idea may be awesome, but you'll face harsh reality once you take it to people who will dissect it and give you down-to-earth feedback. First, validate your idea in a small experiment with the target group, collect feedback, and iterate. By following this path, your road to success will have a rock-solid foundation, without guessing what your customers really need or want.
The words you want to learn and practice a lot are: experiment, prototype, validate, feedback loop, iteration.
In the end, you want to build a software product or service and, trust us, we've been doing it for a handful of years: it's not an easy two-month thing.
Of course, you can throw everything away, do what you believe is best and create that incredible software without help from outsiders but as soon as you get it to the real world, you’ll see everything you missed and will probably find yourself thinking “if only I had asked…”
Getting Real: The smarter, faster, easier way to build a successful web application
This book is kind of a “bible” to me whenever I start a pet project.
Sadly, I read it for the first time too late, after building failed projects at both the personal and the company level.
This was the year the Lean methodology was booming and I got the chance to participate in a country-wide effort to get more people to build software. In this government initiative, several workshops were available to participants. In one of those workshops, here in Barranquilla, I learned about the MVP: Minimum Viable Product.
The instructor explained that the MVP is the smallest version of your application that fulfills the customer's needs. It was a mind-opening moment for me because I finally understood why I had failed before on my own software projects.
I tried to bring those ideas to my boss at the time, but with no luck. The guy was stubborn and, unfortunately, money was spent and lost. Project closed.
I can’t really recall how I found the book but reading it was such a good experience. It mentioned everything that went awry in that project, how we could’ve put it back on track, how to handle third party requests.
Third-party requests can kill a project.
We were working on a Learning Management System project for schools. Every time we left a school after a meeting with the manager, a pile of requests would accumulate in our infinite backlog.
Several shortcuts and a lot of spaghetti code were written to handle so many school-specific requirements. When we realized we couldn't give everything to everyone, it was too late.
Getting Real is a book that gives actionable advice on how to avoid falling into those traps.
When you're starting your web software, you don't need to wait a whole year to try it. You can plan small iterations and start testing with your friends, family, pals, and people on the street. Don't fall into the trap of waiting for it to be “ready” with everything, because it'll be too late. Besides, in six months, you'll have new ideas, and the deadline will be moved or mismanaged because there's a lot more to do.
Don’t do that. Go with the small iterations approach. Remember, Google Ventures employees advise it.
Also, a very good piece of advice from Getting Real is “Less Mass”. Don't get attached to “a hundred features”. That's a sure way to fail.
Did Google start with Gmail, Hangouts, Drive, Cloud, Docs, Keep, Calendar, etc, etc? No, they didn’t. Google started as a search engine and grew from that.
Yeah, it’s nice to have a million features but it’s not worth it when you’re just starting. You’ll be losing a lot of time and money chasing the perfect app instead of delivering (and even better, charging users) early and often.
Shape Up: Stop Running in Circles and Ship Work that Matters
This is an awesome book. It explains how Basecamp (authors of this book and the previous one) works and the way they organize the work to be done in a given period.
If you want to take advantage of the lessons in Shape Up, you need to prepare your mindset. If you're new to software projects, that can play in your favor, as you're not so biased by other project styles such as scrum, kanban, waterfall, etc.
In summary, Shape Up wants you to do a great job defining what's going to be done in the next six weeks. Leave all uncertainty behind, so that developers can go for it with fewer doubts or unclear requirements.
By defining an overarching goal, you'll let developers and designers figure out how to reach it by themselves. No need to create story cards, tasks, or subtasks. Just one goal. Let devs create their own tasks if they feel they need them, or use whatever methodology suits them best.
Normally, in a software project, there’s always something great that will pop up in the middle of an iteration. This is usually a “great” idea by someone in charge and all of a sudden they give it top priority because without it the product “would be useless”. That’s complete BS.
It’s not bad to have ideas. What is bad is to let them slip through your process. Shape Up (and also Getting Real) tells you to say “no” to that idea, at first. Reject the idea until several people or users are affected by the lack of it or even better, they suggest it.
Shape Up proposes a six weeks cycle because it’s a good amount of time to deliver something meaningful. Of course, this is not set in stone and you can test and find the best cycle for your team. What’s important is giving proper time to do serious research, validations, small iterations, and being able to deliver great and important work.
It's not going to be six weeks of doing “small things”. Far from it: those six weeks will be spent doing the important work, delivering value to users. This usually means big tasks. Big releases.
Building software is an exciting journey. There are exciting, complicated problems to be solved, and a new way to help companies or people with their daily lives or routines. Software is very important in our daily lives and this is why building software requires better processes, better mindsets, and better ways to create them.
When building great software, the path and the destination must be great as well. Fortunately, there are awesome books to learn from experts and set yourself up for success.
Thanks for reading, hope you liked this article. Please feel free to share and don’t forget to subscribe to our newsletter below!
Many years ago, I started to develop my first application using VB.Net (Visual Basic). The idea was to control the assets of a company. The app had around 10 forms with many inputs, buttons, grids, and other components. I was alone on the project and had to figure out everything on my own. Besides, I didn't have any idea how to organize the different components on each form. But when I was testing the app, I realized that my experience as a user was much better when inputs, buttons, spaces, and margins were well placed.
While developing the app I learned the importance of colors in a website. Being the only one working on the project forced me to work as the designer and complete review cycles as the QA. Throughout this process I understood that without a good UI, colors won’t have the same impact and the user will not have a good impression of the product.
Each time we build a product, we have a new opportunity to see the development process with other eyes, mainly the users’ ones. Focusing on their needs to solve the different issues in the simplest way possible will allow us to deliver the best experience for the end users.
As developers, we need to change our mindset and acknowledge that we could avoid many obstacles by just following good UI/UX design patterns. And this is the catch: designers should always be present from the beginning to the end of the development process.
Designers are the ones in charge of leading the road to outstanding deliverables. Our role as developers is to take the ideas they put together on a canvas and turn them into UI components. However, the way those components are presented is a game-changing decision: an application might work well, but if it doesn't look good, it won't sell. Solid experience in UX modeling and good judgment for web interfaces are the key skills needed to create a clean design.
Our main mission as developers is not to write code to get a salary, but to understand the purpose of the UI elements and the way they work. Why? Because that will help us know what the user needs and give us a better perspective on the best way to build them.
Designers can see deep into the functionality of each component because they have clarity on what the client is expecting in the different phases of the project. So it is always good practice to ask for a well-detailed explanation of the design's structure and the way the components interact with each other. A good development strategy is always based on a good understanding of the project's goals.
Another important thing to keep in mind is that a product is a process that requires a considerable number of phases before the final one. That's the only way users are going to get the best experience, one that adds real value to their lives. Patience is a great ally on this whole journey.
Overall, we just need to remind ourselves that creating an app is a magnificent journey full of hard decisions and complex issues. No project can be successful if there is no alliance between the design and development teams. Moreover, it is a learning path for both parties, in which each member can sharpen their skills by sharing knowledge with the others in order to shape a high-quality product.
Finally, by having consistent design patterns and the best UX possible, we allow users to understand how the application works more quickly and efficiently. The UI is the guide for users throughout an application or software product, using different elements such as fonts, color palettes, images, and a whole world of animations and components. It isn't just a matter of making something great, but of creating something useful at the same time.
At Ideaware, we’ve helped startup founders and fast-growing companies around the world “staff and scale” their software design and engineering teams. Our team is focused on hiring in the top 5% of developers and designers in Colombia.
If you need to build something special, we can help you to achieve it. Our talented team can bring life to your ideas. Contact Us
Since their inception, we've been told several times that Containers are better than Virtual Machines. Now, I'm here to tell you they aren't. Docker Containers aren't inherently better or worse than Virtual Machines (VMs), but in my experience, the latter are much better to work with. Let me tell you why.
Why Are They Useful?
First of all, let’s recall why these two virtualization tools are very convenient for developers and IT.
In software development teams, it often happens that a team member needs to install the software on a different OS than the one the application runs on.
Software running on an Ubuntu server might be developed on a macOS computer. Installing stuff on macOS is very different from installing it on Ubuntu, and this can cause trouble for developers when trying to run their applications in development mode.
This is where virtualization comes into play. You set up a virtual machine with all the necessary dependencies for your software to run, then give the configuration files to developers, and with a few commands they can have a proper development environment regardless of their computer or operating system.
You now have a portable and reproducible environment for many operating systems.
Docker Containers serve this purpose as well but they do it in a different and more performant way.
Docker Containers vs Virtual Machines
One of the main differences between these two kinds of virtualization tools is that virtual machines might need more HDD space upfront, are slower to build, and can be slower to boot up.
Containers consume less disk space (it depends), are faster to build up, and are faster to launch.
One could say containers are the best of the best. Tools such as Kubernetes might prove that right, and they definitely have a solid ecosystem and use cases.
But what I see is that, from the IT or DevOps perspective, they might be awesome, but in terms of Developer Experience, they're not.
The following are the reasons why I believe Virtual Machines are better than Docker Containers. They're mostly based on good personal experiences in projects where the servers were Virtual Machines, and not-so-good experiences in projects deployed on a container solution.
Find Host or Container
With Virtual Machines, whenever you need to debug or test something in a cloud environment, you just need to ssh into a given IP address and that’s it.
When using Docker Containers you’d have to first get the IP address of the Docker Host and then find the specific container the application was deployed to.
If you have several Docker Hosts, and your app is deployed to several containers, then good luck finding the right container in the right host.
Of course, this can be solved with a script. Code one (or have someone do it for you) that does the following (a rough sketch is shown after the list):
Loop through the IP addresses
Run docker ps
Grep the output and look for the container ID
Continue until there’s a match
If there’s a match, run docker container exec
Do your debugging
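For the sake of illustration, here is a rough sketch of that script in Python (the host IPs, container name, and SSH access are all hypothetical; a real setup would differ):

```python
import subprocess

DOCKER_HOSTS = ["10.0.0.11", "10.0.0.12"]  # hypothetical Docker Host IPs
CONTAINER_NAME = "my-web-app"              # hypothetical container name to find

def find_container(hosts, name):
    for host in hosts:
        # List the running containers on this Docker Host over SSH
        result = subprocess.run(
            ["ssh", host, "docker", "ps", "--format", "{{.ID}} {{.Names}}"],
            capture_output=True,
            text=True,
        )
        for line in result.stdout.splitlines():
            if not line.strip():
                continue
            container_id, container_name = line.split(maxsplit=1)
            if name in container_name:
                return host, container_id
    return None, None

host, container_id = find_container(DOCKER_HOSTS, CONTAINER_NAME)
if container_id:
    # Now we know where to run `docker container exec` and start debugging
    print(f"Found container {container_id} on host {host}")
else:
    print("Container not found on any host")
```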
What a PITA. Now, you have this script and you think you won’t have any more problems. Well, what about rotating IP addresses for security reasons?
This is why I think Virtual Machines are better than Docker Containers. In a cloud environment, accessing a virtual machine is WAY easier than just finding a given container.
File Uploading
Imagine you are handed a new feature to build. You have to upload files to file storage. You think about your users, so you set the upload to run in the background because uploading a file might take some time.
You build it, test it, and try it locally. It works on your machine.
If you push this feature to a web app deployed on a Virtual Machine, well, it's going to work fine. If you deploy it to a web app deployed on Docker Containers, it won't.
Why? Keep reading.
In the containerization world, one container handles one task, so that containers stay small, reproducible, and fast.
With this in mind, your application container only handles running your code, NOT running background jobs. So, if you have background jobs running in a different container, the file upload feature won't work, because the uploaded file won't exist in the background job container.
Let me slow down:
App container: runs code and has its own file system.
Background job container: runs code in the background and has its own file system.
App container: receives uploaded file via form in web browser.
Background job container: picks up the file from the expected folder, but it won't exist because…
The key here is the file system difference. The App container's file system is not the same as the Background job container's file system. When the App container receives the file, it is stored in a temporary path. As this path or folder does not exist in the Background job container, the file won't be found, so it won't be uploaded.
And now we have a situation. We either use some kind of Docker Volume or leave the feature as a synchronous one. In my case, I left the feature to work in a synchronous mode.
This is another reason I think Virtual Machines are better than Docker Containers.
Docker Alpine
Previously, I mentioned Docker Containers are a good alternative to Virtual Machines because they consume less disk space. Well, this might not be 100% true.
It happens that before a container you need a Docker Image. A Docker Image is like a base artifact that describes all the things the container will have when it is run. With a Docker Image, you indicate the container's operating system, installed software, environment variables, configuration files, and the command to execute.
Similar to Virtual Machine images, Docker Images consume disk space. If you're not careful enough, you'll end up using all your HDD space. You have to be mindful of the base image you use to build your images, what dependencies are downloaded, and a few key points to use as little disk space as possible when building your images.
In the end, for the developer who only wants a portable environment, this makes no difference compared to using a virtual machine.
This is when Docker Alpine steps in. What's Docker Alpine? It's a Docker Image that has all the important stuff needed to run Linux and leaves out everything that is not strictly required.
By using Alpine, you can create really slim Docker Images. This brings the benefit that your images will build faster, your containers will be built much faster, and you'll use less disk space.
Of course, it comes with its own set of problems. I experienced one of them.
Generating PDF Files with WKHTMLTOPDF
WKHTMLTOPDF is a tool to generate PDF files out of HTML content. It's really useful because you can reuse HTML files and their styling. PDF generation is a complicated domain, and WKHTMLTOPDF helps a lot to simplify it.
If your software is installed on an Ubuntu server, you'll be more than fine, as many of the WKHTMLTOPDF dependencies are already available in the OS. However, when your software is deployed to a Docker Container based on Docker Alpine, you'll run into problems.
In this situation, Docker Alpine is going to be troublesome. Nothing big but it’s bothersome.
In the end, to solve this issue with Docker Alpine, I had to read through GitHub issues, read some more, and try many options to see what worked and what didn't. In order to make the tool work in the Docker Containers, I had to install several missing dependencies, then more dependencies, and finally WKHTMLTOPDF.
All that trouble could've been avoided with Virtual Machines. It wasn't the first time I used WKHTMLTOPDF. I even have a set of scripts to install it. They've always worked on Ubuntu operating systems. They didn't work on Docker Alpine.
Not that I'm saying Docker Alpine is bad. It's just very different and might cause trouble. But this is another point where I think Virtual Machines are better than Docker Containers.
You might think to yourselves these points are only valid to me because they’re personal experiences. You’re right.
I'm not saying Virtual Machines will always be better than Docker Containers. The message here is that VMs have been great all those times I used and needed them. Containers are cool, performant, and small, of course. Sure. But in terms of Developer Experience, Virtual Machines still have a lot of good stuff to offer.
From my viewpoint Virtual Machines > Docker Containers.
So, you are done building your app and deployed it to production, but you have no idea if people are actually using it or how they are interacting with it. What features do most people use? What are the chokepoints in your app, if any? Is your app even up right now? You just checked, didn't you? What about in 3 hours when you are at dinner? Luckily, your app can answer these things for you with the proper tools.
For starters, why SHOULD you monitor your app? Well, one simple reason: users aren't patient. If you have a really slow app, or it just isn't working, then people will leave. Users who leave are users who aren't paying. You are literally losing revenue by not monitoring your app and making sure it is in top condition: users will stay in your app if they don't have to pull their hair out to use it.
Performance
Monitoring an app involves knowing how well it is performing. There are various metrics that can be used to judge the overall performance of your backend, such as the throughput of endpoints (how many requests to that endpoint come through over a period of time), the time consumption of each request, and DB usage.
As a rule of thumb, it is easiest to order your endpoints by highest throughput and work your way down, seeing how each one behaves and watching out for anything you can optimize. If you target the most used endpoints first, you will have a higher impact on your user base, and any subsequent improvement will have a higher repercussion. Once you start associating the highest-throughput endpoints with features, you can get a very good picture of what the average user is interacting with inside your app.
After you’ve gone through the most used endpoints, it’s time to tackle the slowest overall. Even though these MIGHT not be used that much, on average they are very slow so any request will be on the slow side of your site and frustrate any user who is unfortunate enough to have to use it. It’s very likely that the slowest ones might also have the highest DB usage so try and check them out for the usual suspects of n+1 queries or missing indexes.
After you have optimized your backend's endpoints as much as you can, it's important to continue monitoring them monthly to see how they behave. Correlate how your user base growth is affecting the overall performance of each one of your endpoints. Everything might work fine and dandy with ten users, but at a hundred, a thousand, or ten thousand users, you're likely to start seeing some degradation in performance. This will give you a better heads-up on whether, in the long run, it would be better to adapt your current solution to something else.
Error Tracking
You finally have your super speedy app and everything is done at the speed of light, but it would suck if much of that speed went into errors and buggy interactions. Most users won't report errors; they will just close the tab and move on to find something else. That's why it's also important to have an error tracking tool, something that alerts you to errors your app is having in production. Fortunately, many tools exist, such as Sentry or Rollbar, which track errors across your application's stack.
It's important to address errors as quickly as possible to avoid further complications for your users. Most of the tools used for error tracking provide extra context and information, so it's easier to reproduce the error in your local environment and debug it.
Availability
The other side of monitoring involves knowing if your application is up and running smoothly. Since most applications use a variety of services to function, it's ideal to have multiple alarms checking various metrics to correctly determine if something is down or not running as expected. Some examples are listed below (a small sketch of the first one follows the list):
A periodic simple ping to your Backend/Frontend service might also be a very useful method of determining if your application is available.
If you have a stable throughput of requests, a significantly lower number can be a clear warning sign that something is wrong and some actions need to be taken.
Establishing a borderline Database CPU usage, in case the issue at hand might belong to the Database.
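As a small example of the first item, a periodic health check can be as simple as a script like this (the URL and the alerting mechanism are placeholders; in practice you would plug in your own notification channel or use an uptime service):

```python
import requests  # third-party HTTP client: pip install requests

HEALTHCHECK_URL = "https://example.com/health"  # placeholder endpoint

def alert(message):
    # Placeholder: send the message to Slack, PagerDuty, email, etc.
    print(f"ALERT: {message}")

def check_availability():
    try:
        response = requests.get(HEALTHCHECK_URL, timeout=5)
        if response.status_code != 200:
            alert(f"Unexpected status code: {response.status_code}")
    except requests.RequestException as error:
        alert(f"Service unreachable: {error}")

if __name__ == "__main__":
    # Run this from cron (or any scheduler) every minute or so
    check_availability()
```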
These are just a few guidelines that can be used to make sure your web app is in top condition and running smoothly. It’ll save you lots of headaches to know how well it’s performing so you can act quickly and decisively when making architecture/software scaling decisions. Establishing certain metrics to act as alarms when something is amiss is also crucial for maintaining a high availability.
Serverless seems to be the hype word for applications right now. Every cloud platform, from AWS to Azure to IBM, is listing serverless services and spreading their pros everywhere. However, before we get into serverless applications, we need to understand: what is serverless really about? Are we truly going without servers at all?
The serverless computing model lets you focus on the business side instead of the infrastructure. The majority of platforms offer serverless architectures that are cheap and easy to maintain. Serverless architectures are often based on Functions as a Service (FaaS), deploying only a piece of business logic in the form of a function. Some examples of these services are AWS Lambda and Google Cloud Functions. Adopting the serverless model is very attractive because it means less time getting lost in the implementation of a complex architecture. But there's more to this than meets the eye.
First of all, serverless does not mean getting rid of servers. It just means that the cloud vendor will manage the allocation and provisioning of servers in a dynamic way, so your application can run on stateless computers triggered by an event. Every time a function is called, the cloud vendor manages to assign a compute container for that specific execution. By doing this, the vendor prices the services based on the number of executions rather than computing capacity.
Up to this point, going serverless may seem like a piece of cake, but not everything about the infrastructure should be left to the cloud vendor. For that reason, here are 5 tips for building a serverless application smoothly:
1. Be aware of the use case:
With the advent of serverless, long gone are the days when we had to spend a lot of time and resources every time we wanted to launch to production. On serverless, we don't have to worry about load balancing and orchestration anymore because they are outsourced now. However, the serverless computing model doesn't work for every use case. For example, taking into account that on AWS Lambda every function has to finish within a window of at most 15 minutes, we know beforehand that long-running jobs won't work on serverless.
Also, if you can't predict how many resources, like disk space and memory, your application is going to use, serverless services won't be the best approach, since they have limitations on those by their nature.
2. Use IaaC as much as possible
In the serverless computing model, there is no server administration as we know it, but that doesn't mean it is completely no-ops. There is still a need to set up and deploy each serverless function, allocate resources, configure timeouts, environment variables, etc. Doing that is kind of tedious, even more so for developers not used to managing infrastructure. Don't worry, though: Infrastructure as Code (IaaC) is the solution for that.
Terraform, Ansible, CloudFormation, and others are around to help you turn infrastructure concerns into code that you can copy, paste, test, and even share. Describing each of them would be a matter for another post, but you can always rely on their excellent documentation. Last but not least, there are also vendor-agnostic frameworks to deploy serverless code on several cloud platforms, like Serverless (JavaScript, Python, Golang), Apex, and Up.
3. Keep your bundle as small as possible
Even though serverless cloud platforms support a lot of languages and frameworks, we should keep in mind that anything resource-hungry doesn’t work on serverless. For that reason, the advice is to keep everything as small as possible, since serverless applications are meant to be lightweight.
As for dependencies, whether they are a challenge to implement depends on the language and version you use. In the case of JavaScript and NodeJS, a lot of native dependencies are included, making this easier, but that's not the case for C, to give an example.
Just remember that serverless functions are limited in disk space and memory; because of that, the fewer dependencies a function has, the better it performs on this model.
4. Keep in mind that serverless functions don’t have IPs
Since serverless functions run on dynamically allocated servers, they don't have fixed IPs. That's important to remember whenever you are accessing third-party APIs through VPNs. If you need to access a private endpoint and the only authorization possible is through whitelisted IPs, there are ways to work around this in serverless applications.
At least on AWS Lambda, you can put the functions inside a security group within a VPC whose outbound traffic goes through an Elastic IP, so that they effectively have a static IP.
5. Don’t forget Serverless also means stateless
Last but not least, you cannot forget that serverless functions are stateless, even though they might keep a certain cache. Two executions of the same function can run on two different computing containers; for that reason, you can't just store data on local disk.
If storing data becomes necessary, you can always rely on external services such as databases or file storage, like DynamoDB or S3 on AWS.
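For example, on AWS Lambda a Python function could hand its result off to S3 instead of keeping it on local disk. A minimal sketch (the bucket name and payload are hypothetical; boto3 is the AWS SDK for Python):

```python
import json
import boto3  # AWS SDK for Python, available in the Lambda runtime

s3 = boto3.client("s3")
BUCKET = "my-app-results"  # hypothetical bucket name

def handler(event, context):
    # Anything written to the local file system would be lost between
    # executions, so persist the result in S3 instead.
    result = {"processed": True, "input": event}
    s3.put_object(
        Bucket=BUCKET,
        Key=f"results/{context.aws_request_id}.json",
        Body=json.dumps(result).encode("utf-8"),
    )
    return {"statusCode": 200}
```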