Smart Sessions – Finally a (kinda) window-centric session manager

This is a short post about some software functionality that I’ve long wanted, and a browser extension that gets most of it well enough.

The dream

There’s a simple piece of software that I’ve wanted for several years. Every few months, I go on a binge of trying to find something that does what I’m looking for.

Basically: a session manager that allows you to group windows together somehow, so that you can close and save them all with one click, and then reopen them all with one click.

I make heavy use of the OSX feature “desktops”, which allows multiple separate workspaces in parallel. I’ll typically have a desktop for my logging and tracking, one for chats and comms, one with open blog posts, one with writing projects, one with an open coding project, etc. Each of these is a separate context that I can switch between for doing different kinds of work.

What I want is to be able to easily save each of those contexts, and easily re-open them later.

But since I’ll often have multiple sessions open at the same time, across multiple desktops, I don’t want the session-saver app to save all my windows. Just the ones that are part of a given workspace context.

The best way to do this is if the software could tell which windows were open on which desktops and use that as the discriminator. But some sort of manual drag and drop for adding a (representation of) a window (on a list of windows) to a group would work too.

The situation

This seems to me like something that…there should be a lot of demand for? I think lots of people have many windows, related to different projects that they want to keep separate, open on their computer at the same time.

But, as near as I can tell there’s almost nothing like this.

There are a lot of session managers, browser extensions that allow you to save your tabs for the future (I’ve mostly used OneTab, but there are dozens). However, they’re virtually all tab-centric. A “session” typically refers to a single window with multiple tabs, not to multiple windows with multiple tabs each. This means that to reopen a session (in my “multiple windows” sense of the word), I need to mentally keep track of which windows were included and open all of them one by one, instead of clicking one button to get the context back.

There are a few session managers that save multiple windows in a session (I’m thinking of Session Buddy or the less polished Tab Session Manager), but these have the opposite problem: they save all the open windows, including those that are part of other workflows in other desktops, which means that I have to go through and manually remove them every time I save a session. (This is especially a problem for me because there’s a set of windows that I always keep open on my first desktop.) And on top of that, they tend to save sessions as static snapshots, rather than as mutable objects that change as you work with them, so you need to repeatedly delete old sessions and replace them with updated ones.

Success!

I spent a few hours over the past week, yet again, reading about and trying a bunch of tab managers in rapid succession to find any that have anything like the functionality I’m wanting.

I finally found exactly one that does what I want!

It is a little finicky, with a bunch of small to medium sized UX problems. But it is good enough that I’m going ahead and making a point to try using it.

I’m sharing this here because maybe other people have also been wanting this functionality, and they can benefit from the fruits of my laborious searching.

Current solution: Smart Sessions – Tab Manager

Smart Sessions is a Chrome extension that does let you save groups of windows. This is the best one that I’ve found so far.

When you click on the icon, there’s a button for creating a new session. When you click it, it displays a list of all your current open tabs (another button organizes those tabs by window), with checkboxes. You check the windows that you want included in the session, give it a name, and create the session.

While a session is active (and while a default setting called “auto save” is set to Yes), when you close a tab or a window, it removes that tab or window from the session (though it does create a weird popup every time). You can also remove tabs/windows from the list manually.


The weird popup. It’s not super clear from the text what the options mean, but I think “stop tracking” deactivates the session, and “save” removes the window you just closed from the active session.

You can press the stop button, which closes all the windows, to be reopened later.

When the session is inactive, you can edit the list of tabs and windows that compose a session, removing some (though I think not adding?). You can also right click on any page, select Smart Sessions, and add that page to any session, active or not.

At the bottom of the session list, there’s a button that deletes the session.

This basically has the functionality that I want!

I want to first and foremost give a big hurrah to the developer Serge (Russo?), for being the only person in the world to make what seems to me an obvious and extremely helpful tab-management tool. Thank you Serge!

Some issues or weird behavior

However, it still has a fundamentally tab-centric design, with multi-window sessions seeming like concessions or afterthoughts, rather than core to the user experience. This results in some weird functionality. 

  • Every time you create a new session, you need to click a button so that the selection list is separated by window, instead of only a list of tabs. If you don’t click this button, the selection list is a flat list of tabs, and when you create the session, all the selected tabs will be combined into a single window.
    • (One UX improvement that I could imagine is a global setting on the settings page, “tab-centric default” vs “window-centric default”. You could still press the button to toggle individual sessions, but for window-centric session users, having a default would save a button press each time.)
  • I think as a side effect of the above feature, whenever you create a new session, it takes all the windows of that session (regardless of where they are on the screen or across different desktops) and stacks them all on top of each other, so that only the top one is visible (not even some overlap so you can see how many windows are stacked on top of each other). 
  • It would be intuitive if, while a session is active, opening a new window automatically added that window to the session. Not only does that not work, there appears to be no way to add new windows to a session at all. New tabs get added to the session, but not new windows. The right-click “add to session” functionality adds a single page, as a new tab in one of the windows of a session, not as a new window in that session.
    • The only way, as near as I can find, to increase the number of windows in a session is to drag a tab from a multi-tab window into its own window; both resulting windows are saved as part of the session. To add a new window to a session, the user needs to perform an awkward maneuver to exploit this functionality: first create a new tab in a window that’s part of the session, and then drag it into its own window. Or alternatively, make a new window, add it as a tab to one of the windows that’s part of the active session, and then drag it out again. That tab, again in its own window, will be added to the session.
  • As noted above, every time you close a window that’s part of the active session, the popup appears.

It would be great if these issues were addressed.

Additionally, for some reason the extension is slow to load. Sometimes (but not always), I’ll click on the icon and it will take a full two seconds for the list of sessions to appear. I haven’t yet figured out the pattern for why there’s sometimes a delay and sometimes not.

And finally, there are some worrying reviews that suggest that at least sometimes, the whole history disappears? I’m not sure what’s up with that, but I’m going to make a point to regularly export all my sessions (there’s easy export functionality), just to be careful.

Overall though, this so far works, and I feel pretty excited about it.

Very low energy states seem to “contain” “compressed” fear or despair.

[I wrote this elsewhere, but wanted it someplace where I could link to it in isolation.]

When I’m feeling very low, I can often do Focusing and bring up exactly the concern that feels hopeless or threatened.

When I feel into the fear “underneath” the low energy state, the fear (which I was ignoring or repressing a moment ago) sort of inverts, and comes blaring into body-awareness as panic or anxiety, and my energy comes back. Usually, from there, I can act from the anxiety, moving to take action on the relevant underlying concern.

[Example in my logs on April 10]

When I feel into the low energy, and there’s despair underneath, usually the thing that needs to happen is a move that is like “letting reality in” (this has a visual of a vesicle, with some substance inside of it, which when the membrane is popped diffuses and equalizes with the surrounding environment) or grieving. Usually after I do that, my energy returns.

(Notably there seems to be an element of hiding from or ignoring or repressing what’s true in each of these.)

In both cases, it sort of feels like the low energy state is the compacted form of the fear or despair, like snow that has been crushed solid. And then in doing the Focusing, I allow it to decompress.

Rough Thoughts on the White House Executive order on AI

I spent a few hours reading, and parsing out, sections 4 and 5 of the recent White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

The following are my rough notes on each subsection of those two sections, summarizing what I understand each to mean, along with my personal thoughts.

My high level thoughts are at the bottom.

Section by section

Section 4 – Ensuring the Safety and Security of AI Technology.

4.1

  • Summary:
    • The Secretary of Commerce and NIST are going to develop guidelines and best practices for AI systems.
    • In particular:
      • “launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities through which AI could cause harm, such as in the areas of cybersecurity and biosecurity.”
        • What does this literally mean? Does this allocate funding towards research to develop these benchmarks? What will concretely happen in the world as a result of this initiative?
    • It also calls for the establishment of guidelines for conducting red-teaming.
      • [[quote]]
        • (ii)  Establish appropriate guidelines (except for AI used as a component of a national security system), including appropriate procedures and processes, to enable developers of AI, especially of dual-use foundation models, to conduct AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems.  These efforts shall include:
          •  (A)  coordinating or developing guidelines related to assessing and managing the safety, security, and trustworthiness of dual-use foundation models; and
          • (B)  in coordination with the Secretary of Energy and the Director of the National Science Foundation (NSF), developing and helping to ensure the availability of testing environments, such as testbeds, to support the development of safe, secure, and trustworthy AI technologies, as well as to support the design, development, and deployment of associated PETs, consistent with section 9(b) of this order.
  • Commentary:
    • I imagine that these standards and guidelines are going to be mostly fake.
    • Are there real guidelines somewhere in the world? What process leads to real guidelines?

4.2

  • Summary:
    • a
      • Anyone who has or wants to train a foundation model needs to
        • Report their training plans and safeguards.
        • Report who has access to the model weights, and the cybersecurity protecting them.
        • Report the results of red-teaming on those models, and what they did to meet the safety bars.
      • Anyone with a big enough computing cluster needs to report that they have it.
    • b
      • The Secretary of Commerce (and some associated agencies) will make (and continually update) standards for which models and computing clusters are subject to the above reporting requirements. But for the time being, the thresholds are:
        • Any models that were trained with more than 10^26 flops
        • Any models that are trained primarily on biology data and trained using greater than 10^23 flops
        • Any datacenter with internal networking of greater than 100 gigabits per second
        • Any datacenter that can train an AI at 10^20 flops per second
    • c
      • I don’t know what this subsection is about. Something about protecting the cybersecurity of “United States Infrastructure as a Service” products.
      • This includes some tracking of when foreigners want to use US AI systems in ways that might pose a cyber-security risk, using standards identical to the ones laid out above.
    • d
      • More stuff about IaaS, and verifying the identity of foreigners.
  • Thoughts:
    • Do those numbers add up? It seems like if you’re worried about models that were trained on 10^26 flops in total, you should be worried about much smaller training speed thresholds than 10^20 flops per second. 10^19 flops per second would allow you to train a 10^26-flop model in 115 days, i.e. about 4 months. Those standards don’t seem consistent.
    • What do I think about this overall?
      • I mean, I guess reporting this stuff to the government is a good stepping stone for more radical action, but it depends on what the government decides to do with the reported info.
      • The thresholds match those that I’ve seen in strategy documents of people that I respect, so that seems promising. My understanding is that 10^26 flops is about 1-2 orders of magnitude more compute than was used to train our current biggest models.
      • The interest in red-teaming is promising, but again it depends on the implementation details.
        • I’m very curious about “launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities through which AI could cause harm, such as in the areas of cybersecurity and biosecurity.”
          • What will concretely happen in the world as a result of “an initiative”? Does that mean allocating funding to orgs doing this kind of work? Does it mean setting up some kind of government agency like NIST to…invent benchmarks?
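The consistency worry in the thoughts above comes down to a few lines of arithmetic. Here is a quick sketch checking it; the two thresholds are from the order, while the 10^19 flops-per-second figure (a cluster an order of magnitude below the reporting bar) is my own choice for illustration:

```python
# Thresholds from the executive order (section 4.2(b)):
TOTAL_FLOP_THRESHOLD = 1e26    # total training compute that triggers model reporting
CLUSTER_FLOPS_THRESHOLD = 1e20 # cluster training speed (flop/s) that triggers reporting

# A hypothetical cluster one order of magnitude below the cluster threshold:
speed = 1e19  # flop/s — would NOT need to be reported

# How long would that unreported cluster take to train a model large
# enough to hit the total-compute reporting threshold?
seconds = TOTAL_FLOP_THRESHOLD / speed  # 1e7 seconds
days = seconds / (60 * 60 * 24)

print(f"{days:.0f} days")  # ≈ 116 days, i.e. about four months
```

So a cluster that falls well under the datacenter threshold can still produce a threshold-crossing model in a single training run of a few months, which is the apparent inconsistency the note above is pointing at.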

4.3

  • Summary:
    • They want to protect against AI cyber-security attacks. Mostly this entails government agencies issuing reports.
      • a – Some actions aimed at protecting “critical infrastructure” (whatever that means).
        • Heads of major agencies need to provide an annual report to the Secretary of Homeland security on potential ways that AIs open vulnerabilities to critical infrastructure in their purview.
        • “…The Secretary of the Treasury shall issue a public report on best practices for financial institutions to manage AI-specific cybersecurity risks.”
        • Government orgs will incorporate some new guidelines.
        • The secretary of homeland security will work with government agencies to mandate guidelines.
        • Homeland security will make an advisory committee to “provide to the Secretary of Homeland Security and the Federal Government’s critical infrastructure community advice, information, or recommendations for improving security, resilience, and incident response related to AI usage in critical infrastructure.”
      • b – Using AI to improve cybersecurity
        • One piece that is interesting in that: “the Secretary of Defense and the Secretary of Homeland Security shall…each develop plans for, conduct, and complete an operational pilot project to identify, develop, test, evaluate, and deploy AI capabilities, such as large-language models, to aid in the discovery and remediation of vulnerabilities in critical United States Government software, systems, and networks”, and then report on their results
  • Commentary
    • This is mostly about issuing reports, and guidelines. I have little idea if any of that is real or if this is just an expansion of lost-purpose bureaucracy. My guess is that there will be few people in the systems that have inside views that allow them to write good guidelines for their domains of responsibility regarding AI, and mostly these reports will be epistemically conservative and defensible, with a lot of “X is possibly a risk” where the authors have large uncertainty about how large the risk is.
    • Trying to use AI to improve cyber security sure is interesting. I hope that they can pull that off. It seems like one of the things that ~ needs to happen for the world to end up in a good equilibrium is for computer security to get a lot better. Otherwise anyone developing a powerful model will have the weights stolen, and there’s a really vulnerable vector of attack for not-even-very-capable AI systems. I think the best hope for that is using our AI systems to shore up computer security defense, and hoping that at higher-than-human levels of competence, cyber warfare is not so offense-dominant. (As an example, someone suggested maybe using AI to write a secure successor to C, and then using AI to “swap out” the lower layers of our computing stacks with that more secure low-level language.)
      • Could that possibly happen in government? I generally expect that private companies would be way more competent at this kind of technical research, but maybe the NSA is a notable and important exception? If they’re able to stay ten years ahead in cryptography, maybe they can stay 10 years ahead in AI cyberdefense.
        • This raises the question, what advantage allows the NSA to stay 10 years ahead? I assume that it is a combination of being able to recruit top talent, and that there are things that they are allowed to do that would be illegal for anyone else. But I don’t actually know if that’s true.

4.4 – For reducing AI-mediated chemical, biological, radiological, and nuclear (CBRN) threats, focusing on biological weapons in particular.

  • Summary:
    • a
      • The Secretary of Homeland Security (with help from other executive departments) will “evaluate” the potential of AI to both increase and to defend against these threats. This entails talking with experts and then submitting a report to the president.
      • In particular, it orders the Secretary of Defense (with the help of some other governmental agencies) to conduct a study that “assesses the ways in which AI can increase biosecurity risks, including risks from generative AI models trained on biological data, and makes recommendations on how to mitigate these risks”, evaluates the risks associated with the biology datasets used to train such systems, and assesses ways to use AI to reduce biosecurity risks.
    • b – Specifically to reduce risks from synthetic DNA and RNA.
      • The Office of Science and Technology Policy (with the help of other executive departments) is going to develop a “framework” for synthetic DNA/RNA companies to “implement procurement and screening mechanisms”. This entails developing “criteria and mechanisms” for identifying dangerous nucleotide sequences, and establishing mechanisms for doing at-scale screening of synthetic nucleotides.
      • Once such a framework is in place, all (government?) funding agencies that fund life science research will make compliance with that framework a condition of funding.
      • All of this, once set up, needs to be evaluated and stress tested, and then a report sent to the relevant agencies.
  • Commentary:
    • The part about setting up a framework for mandatory screening of nucleotide sequences, seems non-fake. Or at least it is doing more than commissioning assessments and reports.
      • And it seems like a great idea to me! Even aside from AI concerns, my understanding is that the manufacture of synthetic DNA is one major vector of biorisk. If you can effectively identify dangerous nucleotide sequences (and that is the part that seems most suspicious to me), this is one of the few obvious places to enforce strong legal requirements. These are not (yet) legal requirements, but making this a condition of funding seems like a great step.

4.5

  • Summary
    • Aims to increase the general ability for identifying AI generated content, and mark all Federal AI generated content as such.
    • a
      • The Secretary of Commerce will produce a report on the current and likely-future methods for authenticating non-AI content, identifying AI content, watermarking AI content, and preventing AI systems from “producing child sexual abuse material or producing non-consensual intimate imagery of real individuals (to include intimate digital depictions of the body or body parts of an identifiable individual)”
    • b
      • Using that report, the Secretary of Commerce will develop guidelines for detecting and authenticating AI content.
    • c
      • Those guidelines will be issued to relevant federal agencies
    • d
      • Possibly those guidelines will be folded into the Federal Acquisitions Regulation (whatever that is)
  • Commentary
    • Seems generally good to be able to distinguish between AI generated material and non-AI generated material. I’m not sure if this process will turn up anything real that meaningfully impacts anyone’s experience of communications from the government.

4.6

  • Summary
    • The Secretary of Commerce is responsible for running a “consultation process on potential risks, benefits, other implications” of open source foundation models, and then for submitting a report to the president on the results.
  • Commentary
    • More assessments and reports.
    • This does tell me that someone in the executive department has gotten the memo that open source models mean that it is easy to remove the safeguards that companies try to put in them.

4.7

  • Summary
    • Some stuff about federal data that might be used to train AI Systems. It seems like they want to restrict the data that might enable CBRN weapons or cyberattacks, but otherwise make the data public?
  • Commentary
    • I think I don’t care very much about this?

4.8

  • Summary
    • This orders a National Security Memorandum on AI to be submitted to the president. This memorandum is supposed to “provide guidance to the Department of Defense, other relevant agencies”
  • Commentary:
    • I don’t think that I care about this?

Section 5 – Promoting Innovation and Competition.

5.1 – Attracting AI Talent to the United States.

  • Summary
    • This looks like a bunch of stuff to make it easier for foreign workers with AI relevant expertise to get visas, and to otherwise make it easy for them to come to, live in, work in, and stay in, the US.
  • Commentary
    • I don’t know the sign of this.
    • Do we want AI talent to be concentrated in one country?
      • On the one hand that seems like it accelerates timelines some, especially if there are 99.9th-percentile AI researchers that wouldn’t otherwise be able to get visas, but who can now work at OpenAI. (It would surprise me if this is the case? Those people should all be able to get O1 visas, right?)
      • On the other hand, the more AI talent is concentrated in one country, the more of that talent falls under a single jurisdiction’s regulatory regime for slowing down AI. If enough of the AI talent is in the US, regulations that slow down AI development only in the US still have a substantial impact, at least in the short term, before that talent moves, but maybe also in the long term, if researchers care more about continuing to live in the US than they do about making cutting edge AI progress.

5.2

  • Summary
    • a –
      • The director of the NSF will do a bunch of things to spur AI research.
        • …”launch a pilot program implementing the National AI Research Resource (NAIRR)”. This is evidently something that is intended to boost AI research, but I’m not clear on what it is or what it does.
        • …”fund and launch at least one NSF Regional Innovation Engine that prioritizes AI-related work, such as AI-related research, societal, or workforce needs.”
        • …”establish at least four new National AI Research Institutes, in addition to the 25 currently funded as of the date of this order.”
    • b –
      • The Secretary of Energy will make a pilot program for training AI scientists.
    • c –
      • The Under Secretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office will sort out how generative AI should impact patents, and issue guidance. There will be some similar stuff for copyright.
    • d –
      • Secretary of Homeland Security “shall develop a training, analysis, and evaluation program to mitigate AI-related IP risks”
    • e –
      • The HHS will prioritize grant-making to AI initiatives.
    • f –
      • Something for the veterans.
    • g –
      • Something for climate change
  • Commentary
    • Again. I don’t know how fake this is. My guess is not that fake? There will be a bunch of funding for AI stuff, from the public sector, in the next two years.
    • Most of this seems like random political stuff.

5.3 – Promoting Competition.

  • Summary
    • a –
      • The heads of various departments are supposed to promote competition in AI, including in the inputs to AI (NVIDIA)?
    • b
      • The Secretary of Commerce is going to incentivize competition in the semi-conductor industry, via a bunch of methods including
        • “implementing a flexible membership structure for the National Semiconductor Technology Center that attracts all parts of the semiconductor and microelectronics ecosystem”
        • mentorship programs
        • Increasing the resources available to startups (including datasets)
        • Increasing the funding to R&D for semiconductors
    • c – The Administrator of the Small Business Administration will support small businesses innovating and commercializing AI
    • d
  • Commentary
    • This is a lot of stuff. I don’t know that any of it will really impact how many major players there are at the frontier of AI in 2 years.
    • My guess is probably not much. I don’t think the government knows how to create NVIDIAs or OpenAIs.
    • What the government can do is break up monopolies, but they’re not doing that here.

My high level takeaways

Mostly, this executive order doesn’t seem to push for much object-level action. Instead, it orders a bunch of assessments to be done, and reports on those assessments to be written and passed up to the president.

My best guess is that this is basically an improvement?

I expect something like the following to happen:

  • The relevant department heads talk with a bunch of experts. 
  • They write up very epistemically conservative reports in which they say “we’re pretty sure that our current models in early 2024 can’t help with making bioweapons, but we don’t know (and can’t really know) what capabilities future systems will have, and therefore can’t really know what risk they’ll pose.”
  • The sitting president will then be weighing those unknown levels of national security risks against obvious economic gains and competition with China.

In general, this executive order means that the Executive branch is paying attention. That seems, for now, pretty good. 

(Though I do remember in 2015 how excited and optimistic people in the rationality community were about Elon Musk, “paying attention”, and that ended with him founding OpenAI, what many of those folks consider to be the worst thing that anyone had ever done to date. FTX looked like a huge success worthy of pride, until it turned out that it was a damaging and unethical fraud. I’ve become much more circumspect about which things are wins, especially wins of the form “powerful people are paying attention”.)